AI is now all over the place.
Phones offer to respond to a text on your behalf, computers have introduced new drafting tools, and many look at it simply as a new search engine—a replacement for Google.
These mini-revolutions taking place in every corner of our lives represent both a risk and an opportunity.
Specifically, I want to argue that while, in many cases (college, for instance), AI has made people's lives *easier*, it has not done so in the right sort of way. But I don't think this failing is inherent; it is not that there is *no* good way to use these tools. Rather, we need to change the way we think about them. That is, at least, if we want to remain thoughtful.
I’ll start by looking at AI in the school setting and examining how the debate is currently framed: as a dichotomy between augmentation and replacement. Then, by way of an analogy to train travel, I argue that this distinction is unhelpful; in fact, the ‘augmentation’ approach is just replacement in new clothes. Finally, I suggest an alternative.
AI in the School Setting
Essentially everything a college student was expected to do has become -- if they so please -- easier to do thanks to Artificial Intelligence.
This is wide-ranging and true at almost every stage of any process: brainstorming, outlining, writing, and editing, for instance. Each step can be augmented by AI or totally replaced by it.
Consider two approaches to brainstorming.
In the first, I have a back-and-forth conversation with a chatbot, attempting to stress-test an idea or explore my views on a topic of interest to me.
In the second, I don't really want to write a paper at all, so I tell an AI to go ahead and come up with an idea for me.
This distinction holds generally -- you can ask AI to code something for you, or do a math problem for you, or you can ask it to talk through the foundations of the problem.
And when people lament AI's use in schools, it is usually because they are worried that students are replacing entire parts of the process, rather than augmenting them.
We live in a new world, the thought goes. AI's ability to augment makes all these things easier and a little faster, and it might even make our answers that much better, more thought out. What is sad, according to this picture, is when people forgo steps entirely. When they cease to use AI as an augmentation and instead use it as a tool of avoidance.
Such a picture is compelling and hard to disagree with, in part because it is 'realistic.' Times change, new tools are created, and the question immediately becomes 'how can we best integrate these tools into our lives without hurting ourselves?' But I worry that the immediate disavowal of replacement and the turn to augmentation come far too quickly, and that they prevent us from asking what types of augmentation are acceptable and why.
After all, the move usually looks like this. "It is important that we know how to write, that we do some things ourselves. But a lot of the process is busy work, which can be eliminated or streamlined with the help of AI. As long as we stick to getting rid of that, we'll do just fine."
I want to question this.
Train Travel
Planes are much more efficient than trains. A train from New York to DC takes about 3.5 hours, while a plane takes an hour and 15 minutes. Okay, maybe you're skeptical: if you have to arrive at the airport early and then spend a good deal of time in lines, maybe it is a wash. But the gap only widens over longer distances: a flight from Boston to Seattle is about 6 hours, while the train takes about 74.
A similar line of reasoning applies here. The train is clearly slower, and at the end of the day, you're just trying to get where you're going (right?). So it is best to just go ahead and fly, at least if you're going to Seattle.
But we've moved very quickly, and forgotten that, in this case, it is not beyond question that the most efficient travel method is the best one. It is not even obvious.
Consider: riding the train can be fun! You see things you might normally not; you go through places rather than fly over them. And we certainly would not want to say that a road trip is always misguided.
Ok, we moved too quickly. Yes, planes are surely more efficient. That is beyond dispute. But this does not mean they are an obvious replacement for train or road travel. In fact, in some cases, it is precisely because they are so efficient that they make for a *bad* replacement.
There are two responses someone might have to this. One is better than the other, I think.
The first is to say: fine, in light of what you said, I am going to continue taking planes, say, 75% of the time. But I will leave room for road or train travel. You're right that there's some value there.
The second is to say: Interesting (thanks!). You've introduced two competing qualities: efficiency and inefficiency. Now I am going to think about which fits best into the things I do. Perhaps I will take more road trips with my family (when time allows), but I will ONLY take planes on business trips, where efficiency is more important. This is not, of course, the only way to distribute these qualities; what is important is that this person is thinking about how to distribute them at all.
So, what has happened? Well, we began with a pure efficiency view: one might have thought that because planes are more efficient than trains, they are just all-around better. Then we saw that the efficiency view was not quite right, that there were cases in which trains or road trips were not obviously foolish. This left us with two choices. We could reflect on when efficiency or inefficiency is appropriate, or we could just bluntly decide to do a little bit of both, not guided by any principle.
Now, I want to suggest that the augmentation picture we began with has more in common with the blunt approach than with the reflective one.
Rummaging in the Library
Compare train travel and rummaging in the library. Both involve friction of one form or another. Trains add friction to our lives: they are slow, and they take time to be done well (speeding through neighborhoods *or* sources never does anyone any good). This friction creates space for a special set of activities: looking out the window of an observation car, or stepping out at a stop for a breath of fresh air. In the same way, there is a certain friction to research, to ‘rummaging.’ We are looking for just the information we want, whether in a book or on the internet. It can take a while, and, because of the friction introduced by needing to find the right books, the right pages, and the right words, the possibility for misadventure exists. You might stumble into a really interesting topic and jump down a rabbit hole.
The blunt view we discussed a moment ago failed to recognize this -- or, rather, it failed to bring us to think about the circumstances in which such misadventure is a good thing. And I worry that the augmentation picture of AI does the same.
Let's quickly sketch a caricature of the augmentation picture. Suppose we say that students need to write their final papers on their own, but they can use AI for the initial brainstorming.
Problem: as we've seen, even brainstorming contains the possibility for misadventure (in fact, it is here that misadventure is most common!), and yet it is precisely this step that, according to the augmentation line of thought, we would do just as well to hand off.
By asking their AI of choice what it thinks about their paper, or to suggest potential outlines, students forgo the chance of getting an answer to a question they aren’t asking. And don't say that the same problem arises when we talk to other people, because misadventures certainly do happen when we talk with others: perhaps they mishear us, or something random comes to mind, or they think our idea is crap and we ought to do something else. An AI that responds to your question with a tangent, by contrast, will likely be judged as failing to be useful.
Now, to be fair, research misadventure isn’t always something we want. Sometimes we do just want the answer to the question we typed into the search bar. Maybe the paper is due the next day. But the old way of research allows for a certain spontaneity, a ‘stumbling into’ something you didn’t even know you were looking for.
And it is this fact that the current 'augmentation' view overlooks. At the end of the day, it really just *is* the efficiency view in new clothes. What are we augmenting? We're cutting out all the busy work. And what is the busy work? Precisely what, in other cases, is so important.
And this generalizes. Yes, AI might be able to come up with great responses to our texts, but sometimes it isn't until we're sitting there, unsure of what to say to another person, that we realize there is a problem. Possibilities for misadventure abound, even if they often stand in the way of efficiency.
We are thus faced with a choice. We can view ourselves as building planes, so to speak – creating new tools whose efficiency surpasses the ‘old ways’ of doing research while eliminating their friction. Perhaps AI really is just meant to replace these things, and, at the end of the day, we will not even notice they are gone. Maybe.
But, unlike travel--which is constrained by physics (sadly, we cannot go both fast AND slow)--the tools we code can do two things at once. We can try to balance these different pieces: to leverage the power of AI to answer questions more efficiently, but also to think of ways to preserve the spontaneity involved in going down a rabbit hole.
AI as a General Tool
AI is not a tool made only for students, but for all of us. As it advances, the sense in which *everything* a student does can be done by an AI will become more general: an across-the-board reduction of friction in our lives, if you will.
And this is so appealing, I think, because we will suddenly be able to spend time doing what matters, to do the things that we are, for now, too busy to do. In the case of students, though, this promise seems slightly mistaken. Or, at least, it rests on the assumption that there is nothing valuable to be lost in what we are automating. But, in fact, sometimes the very inefficiencies we hope to strip away *are* the fruits of life!
The application of AI to any given field, then, is a decision we should not make quickly or without thought. We might consider, in all the areas we wish to touch with AI, the value of efficiency and of inefficiency, the times when we may *benefit* from the possibility of misadventure, and how our new tools can provide for it. Otherwise, we risk losing something important.
In other words, we must choose between two possible uses of AI. There is the ‘collegiate’ use: using it simply to make our lives more efficient. And there is the reflective use: asking ourselves, each time, what we are making more space for. We need to be careful, in our search for efficiency, not to eliminate the moments that make life so special in the first place: searching, being confused, and going on misadventures. Though a thoughtful life, a life well lived, need not be full of them, it would be a mistake to try to eliminate them completely.