Big tech is charging ahead in LLM development. Around a decade after virtual assistants like Siri and Alexa were introduced, a new wave of AI helpers with more autonomy is raising the stakes. Experimental systems that run on GPT-4 or similar models are attracting billions of dollars of investment as Silicon Valley competes to capitalize on the advances in AI.
Amazon’s cloud computing division released a suite of technologies aimed at helping other companies develop their own AI-backed chatbots and image-generation services. Microsoft and Alphabet are also looking at selling the underlying technology to other companies through their cloud operations. Microsoft has now launched Copilot, which brings generative AI capabilities such as text-to-image into Windows 11 applications, including Photos, Snipping Tool, and Paint.
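To give a flavour of what selling the underlying technology through the cloud looks like in practice, here is a minimal sketch of invoking a hosted foundation model through Amazon Bedrock's runtime API with boto3. The model ID, prompt, and request schema below are assumptions for illustration only; each model family on Bedrock expects its own payload format, which may have changed since.

```python
import json

import boto3

# Minimal sketch (not Amazon's sample code): call a hosted foundation model
# through the Bedrock runtime API. The model ID and request body are assumed
# for illustration; every model family expects its own schema.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

request_body = {
    # Claude-style text-completion payload; other models use different fields.
    "prompt": "\n\nHuman: Summarise what a large language model is in one line.\n\nAssistant:",
    "max_tokens_to_sample": 200,
}

response = client.invoke_model(
    modelId="anthropic.claude-v2",   # assumed model ID for the example
    body=json.dumps(request_body),
    contentType="application/json",
    accept="application/json",
)

result = json.loads(response["body"].read())
print(result.get("completion", result))  # Claude-style responses carry "completion"
```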
Read more: Impact of big tech layoffs on job seekers
Apple is quietly working on AI tools to challenge Microsoft-backed OpenAI Inc., Google, and others. The company built its own framework to create large language models and its own chatbot service, which some engineers are calling “Apple GPT.”
Meta, meanwhile, is reportedly set to release a multimodal version of Llama shortly. In July, the tech giant brought out CM3leon, which handles both text-to-image and image-to-text generation. In May, it also unveiled ImageBind, a first-of-its-kind AI model that can bind data from six modalities (images and video, audio, text, depth, thermal, and inertial measurement units (IMUs)) at the same time, without any explicit supervision.
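The core idea behind ImageBind, a single embedding space shared across modalities, can be sketched with toy encoders. The snippet below is purely illustrative: the encoders, dimensions, and inputs are made up, and this is not Meta's actual model or API. It only shows how embeddings from different modalities become directly comparable once they land in the same space.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch only: toy encoders standing in for per-modality models.
# ImageBind's real architecture and API differ; this just shows the idea of
# projecting different modalities into one shared space and comparing them.

EMBED_DIM = 1024

class ToyEncoder(torch.nn.Module):
    """Maps a flat feature vector from one modality into the shared space."""
    def __init__(self, input_dim: int):
        super().__init__()
        self.proj = torch.nn.Linear(input_dim, EMBED_DIM)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.proj(x), dim=-1)  # unit-length embeddings

# One encoder per modality (input dimensions are made up for the example).
encoders = {
    "image": ToyEncoder(2048),
    "audio": ToyEncoder(512),
    "text":  ToyEncoder(768),
}

# Fake inputs standing in for preprocessed features of each modality.
inputs = {name: torch.randn(1, enc.proj.in_features) for name, enc in encoders.items()}

# Embed everything into the same space, then compare across modalities.
embeddings = {name: enc(inputs[name]) for name, enc in encoders.items()}
image_vs_audio = (embeddings["image"] @ embeddings["audio"].T).item()
image_vs_text = (embeddings["image"] @ embeddings["text"].T).item()
print(f"image-audio similarity: {image_vs_audio:.3f}")
print(f"image-text similarity:  {image_vs_text:.3f}")
```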
Meanwhile, OpenAI launched multimodal ChatGPT, adding image features to the chatbot by integrating DALL·E 3 into ChatGPT Plus and ChatGPT Enterprise. This opens the door to many new image-based applications for GPT-4, such as generating text to match images.
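For developers, the pairing looks roughly like this with OpenAI's Python SDK: generate an image with DALL·E 3, then have a vision-capable GPT-4 model write text to match it. This is a hedged sketch rather than OpenAI's official example; the vision model name and prompts are assumptions and may vary by account and release.

```python
from openai import OpenAI  # openai>=1.0; assumes OPENAI_API_KEY is set

client = OpenAI()

# Generate an image with DALL-E 3.
image = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor illustration of a robot reading a newspaper",
    size="1024x1024",
    n=1,
)
image_url = image.data[0].url

# Ask a vision-capable GPT-4 model to produce text to match the image.
# The model name below is an assumption; use whichever vision-capable model
# your account has access to.
caption = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Write a one-sentence caption for this image."},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }],
)
print(caption.choices[0].message.content)
```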
The launch comes just ahead of Google's Gemini, which is also expected to offer multimodal functionality.
LinkedIn, too, has launched an effective AI-driven growth loop.
In India, Tech Mahindra launched Project Indus in August, under which the company is building its own advanced language model to improve communication across many Indian languages, starting with Hindi. In the first phase, it aims to cover 40 Hindi dialects (Kinnauri, Kangri, Chambeli, Garhwali, Kumaoni, Jaunsari and more), and then move on to other Indian languages and dialects, catering to 25% of the world’s population.
AI4Bharat, backed by Infosys co-founder Nandan Nilekani, is working along similar lines, creating open-source datasets, tools, models and applications for Indian languages. Last year, the Indian government also brought out Project Bhashini, in partnership with Microsoft, to build language datasets and AI applications through crowdsourcing initiatives such as Bhasha Daan.
The race for the best LLM is heating up. The speed at which big tech is pursuing success in LLMs makes you feel like calling out ‘steady now.’ But no one wants to be left behind, so no one will listen.
When an AI pioneer like Geoffrey Hinton says he quit Google to speak freely about the technology’s dangers, after realizing that computers could surpass people in intelligence far earlier than he and other experts had expected, it feels like a ‘steady now’ moment.
Read more: Big tech & India work towards language inclusive tech
AI models are getting so good that, according to the Guardian, they can “listen” to keystrokes and identify what was typed with more than 90% accuracy. “Typing in a password while on a Zoom call could spell disaster,” it says.
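The attack the Guardian describes boils down to a classification problem: record the sound of individual key presses, convert each clip to a spectrogram, and train a model to predict which key produced it. The sketch below is a toy illustration of that pipeline, not the researchers' actual code; random tensors stand in for real recordings, and the tiny network, clip length, and key count are assumptions.

```python
import torch
import torch.nn as nn
import torchaudio

# Illustrative sketch of an acoustic keystroke attack: turn short audio clips
# of individual key presses into mel-spectrograms and train a classifier to
# predict which key was hit. All sizes below are assumptions for the example.

SAMPLE_RATE = 44_100
NUM_KEYS = 36           # e.g. a-z plus 0-9 (assumed)
CLIP_SAMPLES = 14_700   # roughly a third of a second per keystroke clip

to_melspec = torchaudio.transforms.MelSpectrogram(sample_rate=SAMPLE_RATE, n_mels=64)

classifier = nn.Sequential(          # a deliberately tiny stand-in model
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, NUM_KEYS),
)

# Random tensors stand in for recorded keystroke audio and their labels.
clips = torch.randn(8, CLIP_SAMPLES)
labels = torch.randint(0, NUM_KEYS, (8,))

spectrograms = to_melspec(clips).unsqueeze(1)   # (batch, 1, mels, frames)
logits = classifier(spectrograms)
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()                                 # one illustrative training step
print(f"example loss: {loss.item():.3f}")
```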
But then, none of these big companies got to where they are by listening to the ‘steady nows’. Microsoft founder Bill Gates said in July that “the risks of AI are real but manageable.” “Today’s and tomorrow’s AIs might be unprecedented, but nearly every major innovation in the past has also introduced novel threats that had to be considered and controlled. If we move fast, we can do it again. If we manage the risks of AI, we can help ensure that they’re outweighed by the rewards (of which I believe there are many),” he said. This makes sense too.