Well, that generative AI thing got real pretty quickly
From initial concerns to serious questions that remain, Hugh Miller looks beyond the hype and controversy of generative AI to reflect on where we really are and where we may be headed.
It’s now been about six months since OpenAI unveiled ChatGPT. This wasn’t the first text creation model, but it feels like the moment that the world stopped and took notice of generative AI.
A technology that produces a passable limerick on quantum physics probably deserves the attention. So, too, image generation that can put puffer jackets on Popes. Thousands of articles have already been written about generative AI and its implications. No doubt, a fair few of the articles were already co-written by ChatGPT and its competitors*. Now that we’re (hopefully?) nearing the end of the initial hype cycle, we can reflect on some implications that are now much clearer.
Dispelling fears about big tech and model misinformation
First, there was initial concern that large AI models, including ChatGPT-style Large Language Models (LLMs), would become the domain of a small number of large technology companies. Happily, this does not seem to be the case – several nimbler startups have emerged, and the speed at which other companies have released products suggests that the barriers to entry are not too high.
Second, there was a lot of concern about the impact of hallucinations and misinformation flowing from these models. Here I mean the type of misinformation where a model directly gives poor or wrong advice that turns out to be harmful (as opposed to cases where people deliberately use the technology for their own nefarious purposes). By and large, I think the technology has passed this test. While some unhinged behaviour has emerged, it does not seem to be a rampant issue, and users have seemed well-informed enough to understand (and appreciate!) the imperfections. Vigilance and education are still required, but it hasn’t led to planes falling out of the sky (April Fools articles notwithstanding).
Third, the technology is being recognised as a big deal by the people who should know about these things – most of tech’s elder statesmen seem to think so. OpenAI and Microsoft’s progress was enough to send Google into a panic, attempting a gaggle of AI innovations in short order – hopefully, with more success than the company’s haphazard social media efforts, a similar panic response to Facebook’s rise.
Positives and negatives for education
Fourth, the implications for education specifically seem profound. There are clear positives – the technology can be used to create a private tutor that can dynamically offer advice on work, and show much greater patience than regular human tutors (or regular human parents, for that matter). On the negative, we have an education system that relies heavily on assessments, and many of these are no longer fit for purpose when a tailored essay is a few keystrokes away.
“Will AI create jobs, morph them or destroy them? Time will tell, but there doesn’t seem to be a need for immediate pessimism.”
Questions remain. Perhaps the biggest is that we still lack clarity on what it all means for jobs and services going forward. For LLMs, the fundamental question is which contexts can benefit from a very intelligent chatbot whose accuracy cannot be guaranteed. Examples already exist, such as improved productivity for online chat and incorporating AI-suggested text into workflows. Computer coding similarly seems a space where a dynamic virtual assistant can increase speed. General white-collar work may see productivity gains (or maybe just more polite emails), whereas my plumber will not see a massive impact. And it’s unclear how big the market is for self-driving, AI-powered prams. Will AI create jobs, morph them or destroy them? Time will tell, but there doesn’t seem to be a need for immediate pessimism.
The effects of content creation at speed
One change we are already seeing is an internet with even lower barriers to content creation – whether articles, images, videos, ads or other forms of content. A small team can now build a large website with hundreds of articles very quickly. These can be geared towards specific searches, crowding out existing content. Whether this becomes an unmanageable deluge, with a declining ability to search for quality content, remains to be seen. Perhaps improved search AI will counterbalance the growing challenge of separating out quality.
Transparency, regulation and a place for humans
The other big question is around transparency and regulation. If hundreds of new models are produced, many of them private, then how do we know when AI is driving decisions and how can we be confident of the basis for decisions when this happens? And relatedly, what are the implications of private web-scraped data collections being used to create the models? This is all a tricky space, where regulation is likely to move rapidly, and at different speeds around the world. As ever, Europe appears to be on the front foot in nailing down what a world with AI should look like.
My approach? I think taking a natural interest in how things are evolving is important – some things will change rapidly – and it helps to be forewarned. But for now, I’ll assume the world will continue to need human-led, data-based analysis and advice – and I’ll need to make sure I’m making use of the best tools and technologies to provide them.
* I did not use AI to write this article. But I did ask ChatGPT to rate it out of 10 – and got an 8. And then 7.5. And then 8.5.
This article first appeared in the Actuaries Institute magazine, Actuaries Digital, in Hugh’s regular column, Normal Deviance.