In my previous post, “Artificial Intelligence and the Practice of Law,” I discussed how ChatGPT has ignited tremendous interest in generative artificial intelligence. I also advised firms to take a cautious approach to adopting the technology, highlighting a New York lawyer’s misadventures with ChatGPT: he was sanctioned for failing to verify content the AI created before submitting it to the court. While ChatGPT can do wonderful things, it can also lie to you with incredible confidence. When an AI lies to you, that is called a “hallucination.” Always independently verify anything an AI tells you.
In the time it took you to read the paragraph above, AI systems generated a vast array of new articles and published them across the internet. This is generally done to improve the SEO rankings of myriad websites, though I am skeptical of how effective it will be, because I doubt much of this content will be high quality. I ended my earlier article with, “No chatbot was used in the creation of this article! I wrote it all myself.”
But there is no denying that AI has enormous power to create content. So, for this article, I attempted to use AI to write it! Here’s how that went. First, I have high standards for anything I publish, so at a minimum I planned to thoroughly review and edit anything AI-created. If I were lazier, with lower standards, I could have “phoned in” this article, let the AI write it, and moved on. Especially after a little feedback to correct mistakes and refine the output, it would have passed the “it looks reasonable” test.
In the end, my higher authorship standards compelled me to write the article entirely myself. Beyond some example text that I will clearly identify as AI-created, I did not use any AI-generated content. But the process I went through in using AI was illuminating. This is the story of that journey.
For this experiment, I used three general-purpose AI tools:
- ChatGPT 3.5 (free)
- ChatGPT 4.0 ($20/month subscription)
- Bing Chat (free)
ChatGPT 3.5 is what started the surge in excitement about AI, and it is the most widely used of these three options. So, it seemed like the logical place to start. You can give simple commands to ChatGPT, and you will get responses that may or may not be useful. You can then respond to the result to refine ChatGPT’s answer. In general, the more explicit your instructions, the closer the response will be to what you are looking for. So, I asked ChatGPT 3.5:
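For the technically inclined, the same prompt-and-refine loop can be driven programmatically through OpenAI’s API instead of the chat interface. The sketch below is a minimal illustration, assuming the official `openai` Python package and an `OPENAI_API_KEY` environment variable; the model name and helper functions are my own illustrative choices, not anything from this article.

```python
# Minimal sketch of the prompt-and-refine loop described above,
# assuming the official `openai` package (pip install openai) and an
# OPENAI_API_KEY environment variable. Function names are illustrative.

def start_conversation(prompt: str) -> list[dict]:
    """Begin a chat with a single, explicit user prompt."""
    return [{"role": "user", "content": prompt}]

def add_feedback(history: list[dict], reply: str, feedback: str) -> list[dict]:
    """Record the model's reply, then append corrective feedback,
    mirroring the follow-up messages used to refine a draft."""
    history.append({"role": "assistant", "content": reply})
    history.append({"role": "user", "content": feedback})
    return history

def ask(history: list[dict]) -> str:
    """Send the entire conversation, so the model sees all earlier
    feedback. Requires a valid API key to actually run."""
    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4", messages=history
    )
    return response.choices[0].message.content
```

Note that each call to `ask` resends the whole message history; that is how the chat interface “remembers” your earlier feedback between turns.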
write an article for a newsletter written in the style of popular authors of the wall street journal or the american bar association magazine about the pros and cons of using chatgpt in a law firm setting discuss hallucinations compare chatgpt 3.5 versus chatgpt 4 write this in a very balanced fashion where you are careful to not oversell the features of chatgpt and version 4 versus 3
These very explicit instructions were designed to reduce the time I would waste giving ChatGPT feedback and asking it to tweak the results. ChatGPT’s response began:
The legal industry has always been at the forefront of embracing cutting-edge technology to enhance efficiency and improve client service.
ChatGPT hallucinated in the opening paragraph of my first attempt to use it to create an article! I responded to ChatGPT:
The sentence “The legal industry has always been at the forefront of embracing cutting-edge technology to enhance efficiency and improve client service.” is factually incorrect. While law firms will use technology, like any business will, they tend to be extremely conservative and risk averse. Use this information to update the article.
ChatGPT 3.5 did not handle my feedback as gracefully as I had hoped. It replaced the bad sentence with my words, word for word. It did not understand the editorial point I was trying to convey, so it made no changes to the rest of the article. After I gave ChatGPT 3.5 several more rounds of feedback, it improved the article little by little. While the article was “reasonable,” it was not anything I would feel comfortable publishing without extensive rewrites.
The paid version, 4.0, provides an “incognito mode” where you can optionally turn off “chat history & training.” While this is not a perfect solution, if you are careful with this setting, this version could be more appropriate for legal work than the free version. After providing ChatGPT with my credit card, I tried the exact same commands with version 4 that I had given 3.5. The responses from version 4 were far more refined.
Is it worth $20/month? The experience is noticeably better, so it likely will be for some users. Version 4 did not lie to me by making up that bit about the legal industry always being at the forefront of embracing cutting-edge technology. It also took far fewer follow-up commands to create a “reasonable” article. The final article was higher quality than what I created using version 3.5. But it still was not good enough for me. And, in this case, I just don’t think that either version of ChatGPT knew enough about the latest developments, because its understanding is limited to information from September 2021 and before.
Artificial intelligence creates unpredictable results. If you are not extremely careful, these results can be truly horrific. I’m talking about racist, sexist, and just about any other bad results one might imagine. One of the ways OpenAI mitigated this problem was by performing its base training of versions 3.5 and 4 on a data set “frozen” in September 2021. OpenAI then had teams of people review ChatGPT-generated content for over a year, working to eliminate unacceptable results before opening access to the public. While limiting what ChatGPT knows made the problem of horrible results simpler to solve, it also leaves ChatGPT hampered by not knowing current events.
Relevant facts about artificial intelligence change daily. I was able to work around this limitation by copying and pasting text from more current articles into my ChatGPT sessions, but this approach only goes so far. Ultimately, it was simply too difficult to use ChatGPT directly from OpenAI to create an article on cutting-edge technology that could meet my standards.
In late January 2023, it was announced that Microsoft had invested ten billion dollars in OpenAI for a 49% stake in the company. Shortly after that announcement, Microsoft released Bing Chat, a chatbot based on ChatGPT version 4 that is integrated into the Bing search engine. By combining the power of ChatGPT-4 with real-time access to the internet, Bing Chat provided the most impressive artificial intelligence experience of the three options. After several rounds of feedback, I was able to use it to create a decent article. While I ultimately chose not to use the article, it was good enough that I could have used it as a starting point, and it likely would have saved me time.
While none of these technologies are perfect, Bing Chat was the clear winner of the three in this experiment. Additionally, while you should always err on the side of caution, Microsoft provides the strongest indications that it is working to keep information shared with Bing Chat private. Microsoft recently released a preview version of the technology called Bing Chat Enterprise that will be included in the Microsoft 365 Business Standard, Business Premium, E3, and E5 subscriptions, where it touts “Commercial Data Protection” as the key feature. I will be investigating this further and will keep you updated in future articles.
If you have any questions about anything in this article, artificial intelligence, or business technology in general, feel free to reach out to me at firstname.lastname@example.org.