Less than two years ago, Meta – the parent company of Facebook – announced plans to go "all in" on virtual reality and the metaverse. With consumer engagement in those initiatives so far proving underwhelming, the company has more recently turned its attention to the technology world's current hot topic: generative AI.
Generative AI refers to a class of machine learning applications that can create new data, including text, images, video, or sound, based on the large datasets they have been trained on. Examples include ChatGPT, widely reported as the fastest-growing consumer application of all time, as well as image creation tools such as DALL-E and Stable Diffusion.
Many experts now predict that this technology will disrupt every industry, impacting the products and services we consume as well as the way we work. So here’s a look at some of the ways Meta is implementing these powerful tools across its platforms, along with some thoughts on how the technology might affect its ongoing plans to launch us all into the metaverse.
Generative AI Advertising Tools
Facebook – Meta’s biggest platform and the world’s biggest social network – primarily makes money by allowing businesses to advertise on its pages. Meta has now said it will give those businesses generative AI tools, as the first commercialization of its own generative AI technology.
It's believed that Meta will release tools later this year that allow companies to automatically create multiple versions of adverts, featuring different text and images aimed at different audiences. These tools could fine-tune elements such as the language used, the colors, and even which celebrities and influencers appear in promotions, in order to appeal to different groups depending on their age, their interests, or where in the world they live.
Generative AI-Powered Chat
CEO Mark Zuckerberg has said that one area of focus is creating “AI personas that can help people in a variety of ways.” This would likely tie into plans to incorporate generative AI into the company’s chat technology, making it possible to talk to these characters via Meta’s chat platforms – the largest of which are WhatsApp and Messenger – in order to interact with its various services. It could also allow businesses to integrate these services into their own Facebook pages and WhatsApp channels, effectively enabling any business to offer its own automated, AI-powered customer service and feedback agents.
Image Generation
Meta’s Facebook AI division has developed its own image generation technology, which it has named Instance-Conditioned GAN (IC-GAN). According to its researchers, unlike standard GAN-based image generators, it can create images that are more diverse than those contained in its training dataset.
One of the most useful applications predicted for generative AI is the creation of synthetic data for training other machine learning algorithms. A model like IC-GAN could therefore generate a richer set of synthetic training data from a smaller set of real-world data, potentially reducing the cost of generating, collecting, and storing data for training AI algorithms, as sketched below. Meta also has a text-to-video generative AI application called Make-A-Video, which it has said it plans to incorporate into Reels, its short-form video platform, in the future.
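To make the synthetic-data idea concrete, here is a minimal, purely illustrative sketch – not Meta's IC-GAN code. A placeholder network stands in for a pretrained image generator, random tensors stand in for a small collected dataset, and the two are combined into one larger training set for a downstream model. It assumes only that PyTorch is installed; every name and tensor shape is hypothetical.

```python
# Illustrative sketch only: a stand-in for a pretrained generator such as IC-GAN.
import torch
from torch import nn
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

LATENT_DIM = 128

# Placeholder generator that maps latent noise to 3x64x64 "images".
# In practice this would be a trained generative model loaded from disk.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 3 * 64 * 64),
    nn.Tanh(),
)

def sample_synthetic(n: int) -> torch.Tensor:
    """Draw n latent vectors and decode them into synthetic 64x64 RGB images."""
    with torch.no_grad():
        z = torch.randn(n, LATENT_DIM)
        return generator(z).view(n, 3, 64, 64)

# A small "real" dataset (random tensors stand in for collected, labelled images).
real_images = torch.rand(200, 3, 64, 64)
real_labels = torch.randint(0, 10, (200,))

# A much larger synthetic set; labels here are random purely for illustration.
fake_images = sample_synthetic(800)
fake_labels = torch.randint(0, 10, (800,))

# Combine real and synthetic data so a downstream classifier can train on far
# more examples than were ever collected in the real world.
combined = ConcatDataset([
    TensorDataset(real_images, real_labels),
    TensorDataset(fake_images, fake_labels),
])
loader = DataLoader(combined, batch_size=32, shuffle=True)
print(f"Training set size: {len(combined)} (200 real + 800 synthetic)")
```

The design point is simply that the expensive part – collecting and labelling real-world data – can shrink, while the generator supplies the volume needed for training.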
Natural Language Generation
Language-based generative AI applications such as the chat functions mentioned above are likely to eventually be powered by LLaMA (Large Language Model Meta AI), Meta’s own answer to OpenAI’s ChatGPT and Google’s Bard.
LLaMA is deliberately designed as a smaller language model: its largest version has 65 billion parameters, compared with the one trillion parameters GPT-4 is reported to have. The advantage is that it requires less computing power and fewer resources to retrain in order to test new approaches and use cases. Even smaller versions are available, down to 7 billion parameters. Models of this size could conceivably run on far smaller devices than the cloud servers needed for ChatGPT or Bard, potentially opening the way for self-contained instances on personal computers or even smartphones. This could have important implications for businesses that want to use generative language models while keeping their data private.
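As a rough illustration of what such self-contained, private use could look like, the sketch below loads a small LLaMA-family checkpoint with the Hugging Face transformers library and generates text entirely on local hardware. The model path is a placeholder (LLaMA weights are distributed separately under Meta's own terms), and the half-precision and automatic device placement settings are assumptions chosen to fit a 7-billion-parameter model onto a single consumer GPU; the accelerate package is assumed to be installed alongside transformers and torch.

```python
# Hypothetical sketch: running a small LLaMA-family model locally so that prompts
# and outputs never leave the machine. Assumes a 7B checkpoint has already been
# downloaded to MODEL_PATH and that torch, transformers, and accelerate are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "./llama-7b"  # placeholder path to locally stored weights

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.float16,  # half precision to fit in consumer GPU memory
    device_map="auto",          # spread layers across available GPU/CPU
)

prompt = "Summarise this customer message in one sentence: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because nothing in this setup calls an external API, a business could in principle run such a model behind its own firewall, which is exactly the privacy advantage the smaller LLaMA variants are said to offer.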
Generative AI and the Metaverse
In late 2021, the company formerly known as Facebook rebranded itself as Meta and declared that its future lay in the metaverse. The precise meaning of this term has been much debated, but it usually refers to a “next generation” iteration of the internet featuring more immersive environments, possibly rendered in virtual reality (VR), along with avatars and shared online experiences.
Since then, Meta's stock price has plummeted, it has carried out a wave of layoffs, and revenues across its advertising platforms have declined. Some commentators have blamed at least part of this on the company’s – and particularly Zuckerberg’s – focus on its leap into the metaverse, a concept that has yet to be enthusiastically adopted by the public.
However, despite the switch of focus towards AI in recent months, Meta and Zuckerberg are still, to some extent, sticking to their guns. The metaverse, they claim, will be a key component of their AI vision.
Meta’s own metaverse platform, Horizon Worlds, is built around creativity and, in particular, has been designed to allow users to build their own homes and environments in VR. The company has strongly hinted that this is where its generative AI technology will come into its own. CTO Andrew Bosworth has said: “In the future, you might be able just to describe the world you want to create and have the large language model generate that world for you. And so it makes things like content creation much more accessible to more people.”
This could well mean the company is hoping to increase uptake of its virtual world by making it far simpler for users to jump in and start creating their dream metaverse home, rather than having to learn a complex interface for building and positioning 3D structures. The hope is likely that this democratizing effect will be the catalyst for more of its billion-plus user base to make the leap from the two-dimensional pages of Facebook to the three-dimensional worlds of Horizon Worlds.