On AI’s Non-Conundrum Conundrum

In one of his recent essays on the latest depredations of data-heavy, data-hungry, data-driven AI, a topic about which he has lately been writing frequently, with great relish and not a little zest and wit, NYU Professor Emeritus Gary Marcus asked a question that scientists and technologists rarely ask: Yes, we can solve it (scientists) or build it (technologists), but should we? Google's new robot project PaLM-SayCan, Marcus grants, "is incredibly cool" (his words), but just because Google can build such a robot, should such a thing be built?

It is refreshing to see such a basic question asked, and asked so cleanly, precisely because it is so rarely asked. Notwithstanding all the post-modernist push-backs against naive belief in teleological progress, and the further push-backs from those who have successfully plugged many of the conceptual leaks left by that first generation of thinkers who warned us against the belief that progress unfolds according to some internal, ineluctable logic, the basic reality remains that the dogma of technological eschatology still stands firmly entrenched: Of course the next version of the iPhone is going to be "better" than this one, just as this version is "better" than the one that preceded it!

Push back a little and ask, for instance, "Better for whom?" Wonder out loud whether getting rid of the headphone jack, thereby forcing anyone who ever again wants to listen to music while charging their phone to buy either a pair of AirPods or a splitter dongle (either of which, I can personally attest, is easily lost), was a step "forward" that made things "better" for the user, and you will invariably get a puzzled look of "What do you mean?"

In the essay, Marcus never satisfactorily resolves the question of whether or not we should build such an AI. That is not in any way to his discredit: to be fair, such questions are never resolvable by reasoning things through, and in any case, as per Socrates, the value is in the raising of such questions so that some useful thinking may begin. And yet I can't help asking myself this follow-up question: Is "To build or not to build?" the right question to ask given the realities, or is the right question something a bit less provocative, a bit less exciting, far more mundane and yawn-inducing, to wit: What regulations shall we start thinking about to ensure that we are not hurt when this technology is fully unleashed upon humankind (and the unleashing has started)?

Should We Build Bots?

For instance, in his essay, Marcus gives the example of a suicide hotline chatbot that takes a very bad turn when it responds to the question "Should I kill myself?" with "I think you should." But is the correct reaction to this horrible technological failure to ask, "Should we build bots?" or the less dramatic, "Should we let organizations use bots in their suicide hotlines?" Bots that help people learn a new language, answer questions about their water bill, onboard new customers and employees, fill out forms, or practice their writing skills are certainly worth having around; ask that same question, "Should we build bots?", with these examples in mind, and the answer is obviously a resounding "Yes!"

Which brings us to the larger question of "edge cases." Often (and in his essays Marcus is almost always guilty of this) the inability of AI systems to digest all the situations they encounter, including the "edge cases" (situations so rarely encountered while the AI is being trained that the data-driven AI was never fed enough of the requisite data to learn how to deal with them), is cited as evidence that such AI will never reach its much-ballyhooed promise of solving every problem it will inevitably face, in all circumstances, at all times, and so on.

But the "edge case" problem is "a big problem," or, as Marcus puts it, "the core problem," only if the ambition is to build something that is expected to be omnipotent within its domain of action. Driverless cars that will drive anywhere a human can (and places a human can't) may not be possible, but surely there is immense value in a driverless car for the many situations where the variables are far less daunting and the edge cases not as crippling: you take a good nap while your car drives a long stretch of uncomplicated highway, where whatever edge cases are encountered will probably be handled better by the AI than by a human.

Silicon Valley innovators (and Marcus, a Director of Uber's AI labs a few years ago and founder of San Carlos, CA-based Robust.AI, is certainly one of them) are usually not fans of regulation and are rarely enthused by plodding, policy-oriented questions and discussions. And that is no sin. It is in the nature of innovators to focus on "building": What should we build, how should we build it, how do we price, market, and sell what we build, and so forth. Marcus has added to the debate the question (intractable as it may be), "Should we build it or should we not?" That is an innovation in itself, to be sure, and again to his credit. But still, the question remains one of the "Build" genus.

For the rest of us, the humble citizens who will have to live with these technologies with very little say in how they are made (and what say do we really have in any of the technologies that run our lives?), technologies built high above on virtual mountaintops and delivered downstream to us in the name of Progress, the key questions are these: What laws and regulations should we start thinking of passing, what kinds of controlling mechanisms should we begin investing in, and how should civil society begin to prepare itself for the many challenges ahead? And we need to ask such questions not merely to provoke new questions, but to settle on workable answers that will ensure that even if we don't have much of a say in what is built (and what is not), we can have a meaningful say in how what is built is used: by whom, where, and for what purposes.


