All you can AI in 15 minutes. Let's start by talking about what AI is and isn't. Then we will show how AI tools can be used to facilitate more than just day-to-day project work. Finally, we'll delve into the ethics of AI and shine a light on the challenges we may face when using it.
Introduction to AI
Let's start with a bit of theory. We generally divide AI into Narrow and General. Narrow AI can perform a narrowly defined task, or set of tasks, exceptionally well. Such intelligence has been part of our lives for years: Face ID, spam filters, and many others.
General AI, on the other hand, would show signs of intelligent behavior across a wide variety of cognitive tasks. Such AIs do not yet exist and are unlikely to for at least another decade.
But why have we been talking about AI so much lately, when it has been with us for years and a proper general AI is still a decade away? Primarily because tools like Midjourney, Stability AI, ChatGPT, or GitHub Copilot are generative, meaning they create new content based on the data they were trained on.
And why does this interest and scare us so much at the same time? Primarily because we consider creativity to be inherently human.
ChatGPT cannot be trusted
Let's start with ChatGPT. The New York Times asked ChatGPT (version 3 ☝️) to write an essay about Antoine De Machelet, a Belgian chemist. It immediately began generating an essay, but the essay was entirely fictional. This probably shouldn't surprise anyone: ChatGPT is just a language model, as its name implies (GPT stands for Generative Pre-trained Transformer).
The main problem is that ChatGPT is a Narrow AI focused on a specific set of tasks: understanding and generating text. Nothing in that design says anything about the veracity of the generated text.
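You can see what "just a language model" means with a minimal sketch. This uses GPT-2, a small open-source ancestor of ChatGPT, via the `transformers` library; the prompt reuses the made-up chemist from the NYT experiment:

```python
# GPT-2 only predicts plausible next tokens, over and over.
# Requires: pip install transformers
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator(
    "Antoine De Machelet was a Belgian chemist who",
    max_new_tokens=40,
)
# The continuation reads fluently and confidently, but nothing in the
# model constrains it to be true.
print(out[0]["generated_text"])
```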
AI != ChatGPT
There really are a bunch of AI tools out there; lately, there's no hiding from them on LinkedIn. And all of us certainly use dozens of them daily, consciously or not. I, for example, work with Google Sheets every day, whether I'm exporting data for clients or processing it to support a decision.
Arcwise is a Google Sheets add-on that can, for example, explain what a sheet is for, or edit, unify, and scrape data from an open tab.
I simply ask "What does this sheet do?" and I get a simple answer explaining the main functions.
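Arcwise's internals aren't public, but the idea is easy to sketch: hand the sheet's contents to a language model and ask. The file name, model, and prompt below are my assumptions, not Arcwise's implementation:

```python
# A minimal DIY version of "What does this sheet do?": export the sheet
# as CSV and ask a language model. Assumes the `openai` Python client and
# an OPENAI_API_KEY in the environment; "project_budget.csv" is a
# hypothetical stand-in for the Google Sheet.
import pandas as pd
from openai import OpenAI

df = pd.read_csv("project_budget.csv")
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works here
    messages=[{
        "role": "user",
        "content": "What does this sheet do?\n\n" + df.head(20).to_csv(index=False),
    }],
)
print(response.choices[0].message.content)
```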
I also attend several meetings each day. It's usually part of my job to write down the important points, assign follow-up tasks, and maintain the context of the project.
Otter.ai (but also other tools like MeetGeek or Fireflies.ai) can join a meeting, automatically record it, produce a transcript, and then write a meeting summary. On top of that, you can search the transcripts of all your meetings.
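Otter.ai's pipeline is proprietary, but the core steps can be approximated with open-source parts. A minimal sketch, assuming the `openai-whisper` package and a local recording called meeting.mp3 (both are my assumptions):

```python
# Transcribe a meeting recording and search it: a DIY stand-in for the
# record / transcribe / search workflow described above.
import whisper

model = whisper.load_model("base")        # small, CPU-friendly model
result = model.transcribe("meeting.mp3")  # speech -> text
transcript = result["text"]

# Naive keyword search over the transcript; "deadline" is just an example.
for sentence in transcript.split("."):
    if "deadline" in sentence.lower():
        print(sentence.strip())
```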
One day there will be a complete AI archive linking meeting transcripts, project documents, and ticketing, and we will be able to search all of it easily.
The ethics of AI
Artificial intelligence is relatively new in our lives and essentially unregulated. It's therefore good to know what issues AI brings and how to deal with them ethically.
Returning to the NYT case with the Belgian chemist: why did ChatGPT respond as it did? Why did it make up all the facts? And does that bother us? Maybe a little, but it's mostly a funny story. What if, however, we wanted to know why a Decision Support System recommended a given decision? Or why our Applicant Tracking System recommends rejecting a given applicant? After all, we would ask a human HR specialist why.
This problem is known as the black box problem: no one, often not even the developers, has any idea why the AI responded the way it did. The answer to the black box problem is Explainable AI (XAI), because we are often interested not just in the answer, but in how and why the AI arrived at it.
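To make this concrete, here is a minimal sketch of one XAI technique, SHAP, applied to a toy "applicant scoring" model. The feature names and data are invented for illustration; a real ATS is of course far more complex:

```python
# Train a toy applicant-scoring model, then use SHAP to ask *why* it
# scored one particular applicant the way it did, instead of accepting
# a black-box number. Requires: pip install shap scikit-learn pandas
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(seed=0)
X = pd.DataFrame({
    "years_experience": rng.integers(0, 20, 500),
    "num_certifications": rng.integers(0, 5, 500),
    "interview_score": rng.uniform(0, 10, 500),
})
y = X["interview_score"] + 0.2 * X["years_experience"]  # hidden "true" rule

model = RandomForestRegressor(random_state=0).fit(X, y)

# SHAP attributes one applicant's score to individual features:
# positive values pushed the score up, negative values pushed it down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])
print(dict(zip(X.columns, shap_values[0])))
```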
The second problem I will present is called garbage in, garbage out, a fairly well-known computer science principle. "Garbage in", however, does not mean typing garbage into ChatGPT, and "garbage out" does not mean its response.
Garbage in refers to the training data for machine learning algorithms. If the input data is not of sufficient quality, the results won't be either. Bad data may reflect our previous bad decisions or our own biases; it may underrepresent women or minorities. Artificial intelligence will only compound these problems.
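One practical habit is to audit the training data before any model ever sees it. A minimal sketch, assuming a hypothetical hiring dataset with `gender` and `hired` columns:

```python
# Quick "garbage in" sanity checks on a hypothetical hiring dataset:
# how are groups represented, and do the labels already encode bias?
import pandas as pd

df = pd.read_csv("hiring_history.csv")  # hypothetical training data

# 1. Representation: is any group heavily underrepresented?
print(df["gender"].value_counts(normalize=True))

# 2. Label bias: do past decisions favour one group? A model trained on
#    this column will faithfully reproduce whatever skew it finds here.
print(df.groupby("gender")["hired"].mean())
```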
What can happen if we don't pay enough attention to the training data? Headlines like these:
- iPhone X racism row: Apple's Face ID fails to distinguish between Chinese users
- Self-driving cars more likely to drive into black people, study claims
- Uber’s Self-Driving Car Didn’t Know Pedestrians Could Jaywalk
AI in project management
Artificial intelligence can save us a lot of work, often work we don't want to do anyway. But we need to approach its results critically and be aware of the limits AI has (at least for now), for it is stupid in ways we cannot understand. And I'll be glad if you approach it that way.