A Fundamental Misunderstanding About AI
CEOs confused about their own products
It boggles my mind how the world bounces back and forth between fear-mongering and blind optimism regarding A.I. Every day, you hear that A.I. is going to take your job, yet it still hasn't done so, three years into its explosion.
I'm not going to ignore the fact that A.I. has had a substantial impact on jobs, but impacting jobs is not the same as taking them over.
Programmers, for example, are impacted by A.I. because it can help us do our jobs more efficiently. We can now ask an A.I. to write the boilerplate for a program or script, or have these robots show us how to use packages we don't know off the top of our heads.
The one thing to note here is that we, the programmers, are asking the A.I. to write this code. It's not the A.I. acting on its own, and it's not the CEO of the company that makes it; it's the programmers.
SXSW
I was reading Variety today about the sizzle reels shown before screenings of "The Fall Guy" and "Immaculate." In the article, Michael Schneider quotes the VP of consumer product and head of ChatGPT, Peter Deng, saying, "I actually think that A.I. fundamentally makes us more human." That quote gave me pause, as the internal screaming got so loud I couldn't think or move.
Once I calmed down, I laughed at the fact that this was said in front of an audience of writers and actors, as Schneider snarkily pointed out.
One thing that kept bothering me was whether Peter Deng even understood his own product.
A Fundamental Misunderstanding
Don’t get me wrong—A.I. is a powerful tool. As mentioned, it’s helpful when programming, it does a very good job at summarizing, and if you need a quick bit of inspiration, A.I. image generation can make it in a snap. However, none of these can replace a person.
To the naysayers, A.I. is supposed to be the end of work, creativity, everything, and yet it isn’t. This reminds me a lot of video games.
Video game companies’ promises
Video game companies are notorious for making huge promises and delivering terrible experiences. Take Anthem, for example.
Anthem was designed to be a looter shooter in Iron Man suits; pretty cool, right? The game plays decently well, with good mechanics and fairly decent graphics. The problem? Anthem was an absolute waste of potential.
The creators were given seven years of development time, five of which were spent debating the color of grass. When their parent company, EA, started asking for updates, they quickly ended the debate and created a trailer for a game for which they had absolutely no plans whatsoever.
When it was released, the game was poorly received and shut down after only a few updates. I made the heinous mistake of pre-ordering based on what I was promised: an amazing game. I relinquished my hard-earned cash and bought into wasted potential.
Much like those video game companies, OpenAI, Midjourney, Google (with Bard), and the other big-name A.I. companies are selling potential.
Selling potential
The current version of A.I. we're using isn't even really A.I. In my previous post, A.I. Won't End Creativity, I mentioned that it's a bit more than the predictive text generator on your phone, which is true. "A.I.," as it currently stands, is only a piece of what's needed to reach Artificial General Intelligence.
It's like this: say you're in New York and you want to get to Las Vegas, so you head west. Somehow you end up in Anchorage, Alaska, and now you're calling it Las Vegas. Yeah, Anchorage is west of New York, but that doesn't mean you're in Las Vegas. This is pretty much what OpenAI and the other companies are telling you right now.
Yeah, LLMs and neural networks are likely key components of A.I. and AGI, but that doesn't mean they are A.I. or AGI.
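To make the "predictive text" comparison concrete, here's a toy sketch (the corpus and function names are my own invention) of the same next-word-prediction idea your phone keyboard uses. An LLM applies this predict-the-next-token principle at an enormously larger scale, but the core loop is the same: look at what came before, guess what comes next.

```python
from collections import defaultdict
import random

# Tiny training corpus; a real model trains on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Record which word has followed each word (a bigram model).
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def predict(word):
    # Return a word that has followed `word` in the corpus,
    # weighted by how often it appeared; None if never seen.
    return random.choice(following[word]) if word in following else None

print(predict("the"))  # one of: "cat", "mat", "fish"
```

No understanding, no goals, no humanity being restored; just statistics over what tends to come next.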
In Conclusion
Realistically, one tiny article won’t stop these behemoth companies from spouting utter nonsense. Hell, bigger publications have tried, and those companies still pile on the BS. However, you can stop listening to them. Educate those you can and reassure those who believe this nonsense threatens their livelihoods.