On my winter break reading list, I had Co-Intelligence: Living and Working with AI by Ethan Mollick. If I were really following the advice of the book, I would have AI take a first cut at this review, which I could then finalize. That would be a pretty good way to do it. As Mr. Mollick points out, current LLMs are excellent at creating summaries. However, as befits the logo at the bottom of this page, I’ll do it the old-fashioned way.
Co-Intelligence is aptly titled, as most of the book is about how to understand and incorporate AI into your work. He is realistic about the state of play for current AI models and offers some solid advice about how to use them effectively. He presents four rules for working with AI.
First, try to use it for your tasks to gain experience with how it can help you do your day-to-day work. That seems like solid advice for anyone. Give it a try and see how it can help. My experience is mixed, but lines up fairly well with his descriptions: with the right prompting on the right tasks, AI is very interesting and useful.
Second, he advises that you put yourself in a good position for further adoption of AI in your organization by being “in the loop”. If you follow the first rule effectively, that will put you in good stead as AI is further adopted. That also seems like sensible advice. AI is an important new technology and will change the way that knowledge workers work. Being at the forefront of that is a good strategy.
Third, although there aren’t many hard and fast rules for using AIs effectively, one that Mr. Mollick proposes is to give the AI context and a persona when you are working with it. So, don’t just ask, “What was Bob Dylan’s impact on rock and roll?” Instead, do something like “As a respected musical historian with an in-depth knowledge of modern musical history, describe Bob Dylan’s impact on rock and roll.” That’s an interesting approach and does seem to actually work pretty well for me in some brief experimentation with ChatGPT.
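If you find yourself applying the persona rule often, it can be reduced to a trivial prompt template. This little helper is my own illustration, not anything from the book, and the function name is made up:

```python
def persona_prompt(persona: str, task: str) -> str:
    """Frame a task with a persona, following Mollick's third rule.

    Prepends "As <persona>, " and lowercases the first letter of the
    task so the combined sentence reads naturally.
    """
    return f"As {persona}, {task[0].lower()}{task[1:]}"


# The bare question vs. the persona-framed version from the example above:
framed = persona_prompt(
    "a respected musical historian with an in-depth knowledge "
    "of modern musical history",
    "Describe Bob Dylan's impact on rock and roll.",
)
```

The resulting string would then be sent to whatever chat model you use; the framing is the whole trick.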
Fourth, assume that this is the worst AI you will ever use. As the models evolve, their work will get better over time. He gives some examples of responses from GPT-3.5 compared to GPT-4 to illustrate the difference a year makes. It is quite striking how much better the models have gotten in that time frame. All evidence is that they will continue to get better.
The book makes some good points about how we have to change the way we work and think to adapt to new technologies (e.g., using calculators and spreadsheets) but that knowing the basics is still important. In order to spot mistakes or hallucinations made by AI, you need to have a grounding in the subject matter. These changes will happen slowly and organically at the grassroots, not through the forward-looking statements of CEOs.
Expecting AI to do all of your thinking for you is not workable. If it becomes so, then you might be vulnerable to being replaced by an AI, as he posits will happen with call centers and other script-based jobs currently handled by humans.
He closes the book out by positing a few different paths that AI might take in the future. One is that AI stagnates where it is and makes very little improvement going forward. He sees this as unlikely. Even in this case, what we have is useful enough to apply to many tasks and will have some impact going forward.
The next case is that AI improves linearly, by 10% or so per year. In that case, there are some dangers around adoption and the potential for disruptive use by criminals, terrorists, or adverse state actors. However, AI can also be used to counter those disruptions and deliver real productivity gains. This change will be destabilizing but controllable.
After that, Mr. Mollick posits exponential growth in AI that stops before AGI or superintelligent AIs. That is the second case on steroids, with more potential for disruption but also more benefits. These two cases are the ones he puts the highest likelihood on.
Lastly, he discusses the case where AI really takes off and we get AGI or superintelligent AIs in short order. This is a wild card. There is really no telling how that would work out. It could be the end of the human species, as some doomsayers posit, or it could be the end of all of our human problems as God-like AIs figure out how to provide for us.
Co-Intelligence was an interesting and timely book that gives some good advice about working with AI. I’ll take it to heart and look for more ways to use AI tools to good effect.