Some More Thoughts On “AI”
Written by Ben Esplin
I just returned from the HumanX Conference on “AI” this week. At the conference, I spent the better part of two days dropping in on panels, lectures, and Q&A sessions to listen to thought leaders in technology discuss various aspects of “AI.” In many ways, it confirmed my instincts about the way people are thinking about this technology as we near the beginning of the end of its hype cycle.
I have written previously about “AI,” and the way this technology eludes efficient discussion. As used in most of the conversations I heard or participated in, “AI” was intended to mean generative artificial intelligence using trained models, which are still fairly new. The technology is difficult to grasp, and therefore to discuss, because it can be implemented in software systems and workflows in ingenious and sophisticated ways by developers who barely have a working understanding of the underlying technology. Even a developer who has downloaded a model, gotten it running (in the cloud or on prem), and trained or fine-tuned it typically does not understand in depth how the model actually functions. In other words, there are multiple levels at which someone may be fairly sophisticated in “AI” while still being relatively naïve about one or more deeper levels.
As a result, “AI” inspires awe and hope that tend to be disconnected from meaningful insight. As an example, one VERY famous venture capitalist who shall remain nameless here stated that he believes that within the next 20 to 25 years, bipedal robots under the control of “AI” will be doing more manual labor than all of the humans in the world are doing at the present time. I find this preposterous. At present, robots are not controlled by the types of models commonly referred to as “AI.” But even if we assume models capable of controlling bipedal robots will be developed in the next 20 to 25 years, I find the prediction completely unrealistic. The army of bipedal robots required for this amount of mechanical labor would need an army of human mechanics and technicians of almost equivalent size just to maintain it, let alone to build it in the first place. Then there are the power requirements of not only the robots themselves, but also of the compute required to run the as yet undeveloped “AI” models capable of providing autonomous or semi-autonomous control of the robots.
For me, the biggest takeaway from the conference was that the acceleration of capabilities provided by “AI” in the past several years, though it appears to be slowing, has outstripped our practical capacity to keep pace. At this point, “AI” still has the frothiness of Web 2.0 in 2007 or blockchain in 2012. As the technology matures, it will be fascinating to see how much of its early promise can be realized in the short to mid term.