
The SEC Newgate AI Weekly

By Tom Flynn
20 July 2023

The latest developments in another hectic week in AI.

Apple in no hurry to ripen for market

Apple likes to take its time and get its products as close to perfect as possible, so it has been no surprise to watchers of the tech giant that CEO Tim Cook has adopted a more considered approach than some competitors. But that doesn’t mean Apple isn’t spending serious time and money on developing its own generative AI offer. According to Bloomberg, Apple has built its own framework for large language models and is using it to power an internal chatbot tool. There is no sign that Apple is ready to integrate the technology into iOS (or that it has a clear idea of how best to use it to supercharge its current operating system), but an announcement is expected in 2024. Apple certainly can’t leave it much later than that without risking being left behind.

Regulation, Regulation, Regulation

AI is on the US regulatory radar this week, with Securities and Exchange Commission Chair Gary Gensler expressing concern about the risk of generative AI exacerbating “the inherent network interconnectedness of the global financial system”. Gensler is a long-term advocate of caution in the use of AI in financial markets, so the direction the SEC takes on this is worth watching. Earlier in the week, the Federal Trade Commission announced an investigation into OpenAI, the organisation behind ChatGPT, looking specifically at potential harms from the chatbot’s tendency to provide false information. Here at SEC Newgate UK, we have been obsessed for some time now with the reputational impact of generative AI chatbot hallucination (for example, we’ve seen some genuinely jaw-dropping claims from Google Bard about us, our clients and their management teams), so this probe is a welcome first step towards ensuring that reputation is properly considered by OpenAI, Google, Microsoft, Anthropic and all the others rushing products to market.

Does interacting with humans make AI less smart?

There has been discussion for a few weeks on social media and in online forums about the performance of GPT-4, with a significant volume of anecdotal evidence suggesting that the model’s abilities have deteriorated since launch. Some have suggested that as the model learns from humans, its performance has declined. This week, researchers from Stanford and Berkeley released a study that appeared to support claims that GPT-4 has become less able to solve problems and generate accurate code. However, other experts say this is a prompting issue, with simpler prompts no longer getting effective results whilst more complex prompts are working fine. OpenAI themselves say it’s not true and that each version is “smarter than the previous one”. For now, the jury is out, but if you haven’t already, maybe now is the time to invest in some advanced prompt training? A simple way to see the difference prompting can make is sketched below.
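For readers who want to test the “simple versus detailed prompt” theory for themselves, here is a minimal sketch of a side-by-side comparison, using the openai Python library as it stood in mid-2023. The API key placeholder, the exact prompt wording and the choice of question are our own illustrative assumptions, not the researchers’ methodology.

```python
# A minimal sketch: the same question asked with a terse prompt and a
# more detailed, step-by-step prompt, so the answers can be compared.
# Assumes the pre-1.0 openai Python library (mid-2023) and a valid key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key

QUESTION = "Is 17077 a prime number?"

# Terse prompt: just the bare question.
terse = [{"role": "user", "content": QUESTION}]

# Detailed prompt: asks the model to reason step by step before answering.
detailed = [{"role": "user", "content": (
    QUESTION + " Think step by step: check divisibility by each prime "
    "up to the square root, then state your final answer."
)}]

for label, messages in [("terse", terse), ("detailed", detailed)]:
    response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    print(label, "->", response.choices[0].message.content)
```

Running the same pair of prompts against successive model snapshots (for example, gpt-4-0314 versus gpt-4-0613) is roughly how the drift claims are being tested.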

AP connects its API to OpenAI?

Associated Press and OpenAI have announced a partnership which will see ChatGPT trained on AP’s text archive, with OpenAI offering up technology and product expertise in return. Whilst there is no further detail at present, the use of non-partisan news content to train LLMs is likely to be a positive for both users and those concerned about generative AI bias.