Botanalytics Blog
January 2017


Bot Enthusiasts : Chatbots with Clive Thompson


This post is published on Chatbots Life.

Botanalytics presents: Interviews with Bot Enthusiasts. Our second guest is Clive Thompson. He writes about how technology affects everyday life. He’s currently working on his next book, about “how programmers think”. He’s a contributing writer for the New York Times Magazine and Wired, and the author of Smarter Than You Think: How Technology is Changing Our Minds for the Better.


Clive Thompson

Botanalytics: How do you see the place of chatbots in people’s lives now?

CT: Chatbots go back to the 90s, when people first started using chat programs en masse, like AOL Instant Messenger — and programmers started making simple chatbots to have fun. These chatbots were mostly just for entertainment.

Today, chatbots are re-emerging in a more utilitarian role — they’re a way for people to interact with computers and services. When someone asks Alexa to set a timer or tell them the weather, they’re basically using a chat to *get something done* … which is quite different from merely using it to entertain yourself. I’m seeing a lot more attempts to create these “useful” chatbots, particularly on Slack, where many companies increasingly run their internal business. On Slack, all the communications are text-based anyway — so it’s a native environment for a chatbot to exist in.

Botanalytics: What is the behavioral motivation to use chatbots now, and what will it be in the future?

CT: In one sense, the motivation is different from in the past — because as I said, it’s going from “entertainment” to “utility”.

But one thing hasn’t changed, and probably won’t change: our desire to use everyday language to interact with a computer. For certain tasks (e.g. when your hands are otherwise occupied), it’s a lot easier to ask a computer for info and hear it back.

Botanalytics: How can chatbots be valuable to users?

CT: The short answer is: By using them whenever you can make a task easier, richer, more useful or more entertaining.

The middle answer is: Designers have to watch out for when they use chatbots, because these use-cases I mention above are really quite different. They don’t always reinforce each other, and can sometimes break one another. (Making a UI “entertaining” can make it less useful, for example.)

The long answer is: Right now, a lot of chatbot experimentation is actually pretty bad — it’s companies throwing a chatbot layer on top of a service instead of just making a nice, usable interface. That’s the way this always works; when a tech becomes hot and new, you have a million experiments, 99.9% of which are terrible, and then someone stumbles onto the genuinely useful/great 0.1% of use-cases and everyone follows suit. So the long answer is, “I don’t know” how chatbots will turn out to be truly useful … I’ll have to wait for the field to mature more.

Botanalytics: You’re writing a book about how programmers think. Someday, A.I. may become the programmers of the future. Can you compare how human programmers think with how future A.I. programmers might?

CT: It’s really hard to compare how “future” AI will write programs, because I don’t know what far-future AI will be capable of.

But in the short run, here’s one observation. Computer programming is really just the art, craft and science of issuing commands to a computer to tell it to do something. In the old days you had to issue incredibly detailed commands — like, where to put each individual character on a screen, or how much memory to allocate to a particular routine. Over the next few decades, computer languages (and computer systems, including operating systems) evolved so that a programmer didn’t need to worry about such low-level things … and computer languages evolved to be more “abstract”. They began to resemble human language, in a way.


Tim O’Reilly told me last spring that he thinks chatbots and AI are creating a style of everyday computer programming that everyday people do — even though they don’t *think* of it as computer programming. For example, if you use Google Calendar or Alexa, you can issue simple, real-language commands that tell the computer to, say, remind you to do something in three days. That is, O’Reilly points out, a form of computer programming: You’re issuing a little script for the computer to follow — it’s just that you’re doing it using your voice instead of, say, typing a “cron” job into a UNIX system.
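To make O’Reilly’s analogy concrete, here is what that spoken reminder might look like when written out as an actual cron entry. (This is an illustrative sketch — the date, time, and `notify-send` command are assumptions, not anything from the interview.)

```shell
# Spoken version: "Remind me in three days to call the dentist."
# Written as a one-line "script" in a UNIX crontab (edit with `crontab -e`):
#
# ┌ minute  ┌ hour  ┌ day-of-month  ┌ month  ┌ day-of-week
# │         │       │               │        │
  0         9       20              1        *   notify-send "Call the dentist"
#
# This fires at 9:00 on January 20th — the same instruction the voice
# command expresses, just encoded in cron's field syntax instead of speech.
```

Either way, the user is handing the computer a small deferred program: a trigger condition plus an action to run when it’s met.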

O’Reilly thinks that AI will allow chatbots to become ever more fluent at parsing human intent, and so we’ll evolve programming to become more like talking to a computer: You’d tell it all the specs of the task you need it to do, and it would use AI to assemble a bunch of routines that do that.

Mind you, folks who worry about AI taking over are concerned about the idea of AI programming other AI. Their concern is that an AI that could really do open-ended programming could rapidly figure out how to program an AI that’s smarter and more capable than it is, and on and on, so that it could evolve — in a few milliseconds — a trillionth-generation AI that is so capable that we can’t predict or understand its actions.

I worry a bit less about that latter scenario, though it’s obviously one to keep one’s eye on.

Botanalytics: Do you use any chatbots right now?

CT: Mostly only Alexa. In my household, we use Alexa mostly to set timers — it’s very useful and faster than setting one on your phone. Our second-most-common use is that my kids use it every morning to tell them the weather. Third is using it to play songs on Spotify. And a distant fourth is using it to google something trivial — like how to spell a word.

I tried using the Quartz news chatbot, but found it far too slow — I prefer to rapidly read/skim news myself. This is a problem with current chatbot design: It’s a conversational model, and certain tasks, like imbibing news, don’t work so well as a conversation, for me.

Botanalytics: Do you think chatbots will become a must in people’s daily lives?

CT: Probably, though not necessarily in a good way. If we’re lucky they’ll become a “must” by dint of being really superior to other ways of interacting with a computer — they’ll be more useful, more fun, and richer than traditional interfaces like the keyboard and screen.

If we’re unlucky, they’ll be a “must” because companies deploy them to get rid of human labor — essentially forcing us to use them. Think of the way that so many companies have gotten rid of human on-the-phone helpdesk staff, and instead force customers to wade through useless, horribly-designed phone-tree “help” lines that are, in essence, chatbots: Big computer systems programmed to give a limited bunch of replies to your inputs. Frankly, even if you get to talk to a human on the phone, they’re often so ill-trained that all they can do is take your question, type it into a database, and read back the answer they get without really understanding it. They are, to all intents and purposes, acting like a chatbot themselves.

So chatbots could become a “must” in the most terrible way possible: if we’re forced to use lousy ones so that companies can save on human labor.

Botanalytics: As you know, innovation happens so quickly. How do you think people will adapt cognitively to changes in technology?

CT: Historically, our pattern is pretty regular: People in their 30s and 40s freak out when a new technology comes along, because it disrupts their everyday behavior. A chunk of them decide they like it; the rest complain bitterly, until they die, about how much simpler life was when they were young. In contrast, children and people in their early 20s are more adaptable, so they more enthusiastically adopt a new tech — with one problem: Because they have no basis of comparison (the technology has “always been there” in their teen and adult lives), they have trouble seeing how it has changed society, for ill and for good.

The deeper question is how *should* we adapt? I tend to say: It’s good to experiment with new communications and computational technologies, so we can kick the tires and see what they’re good and bad for. We should also *step away* from new technologies frequently so we can remind ourselves of those two things — what they’re good for, and what they’re bad for.


The problem often becomes that a new tech is adopted so quickly that it’s hard to step away. Stepping away from Facebook today, for example, means ceasing to engage with your friends in the manner they prefer. So the third thing we need to do (and this one is hard) is push our political representatives to pass laws and regulations that even out the balance a bit, and make sure that the civic world is on the everyday citizen’s side. That includes everything from, say, EU-style laws that require tech firms to design with our privacy in mind, to the government pursuing antitrust cases if a firm becomes so big it “locks us in” to their mode of doing things.

And read, read, read. It’s always good to educate yourself about how technology affects everyday life!
