Generative AI landscape: What is generative AI and what are its…
By Przemek Chojecki, Data Science Rush
Among the most popular generative models are Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and autoregressive models. Generative AI also has many applications, such as music, art, gaming and healthcare, that make it attractive to a broader population.

A hallmark of the last few years has been the rise of the "Modern Data Stack" (MDS). Part architecture, part de facto marketing alliance among vendors, the MDS is a series of modern, cloud-based tools to collect, store, transform and analyze data. Upstream of the data warehouse, various tools (Fivetran, Matillion, Airbyte, Meltano, etc.) extract data from its original sources and load it into the warehouse. At the warehouse level, other tools then transform the data: the "T" in what used to be known as ETL (extract, transform, load), now reordered to ELT because transformation happens after loading (here, dbt Labs reigns largely supreme).
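The ELT pattern described above can be sketched end to end in a few lines. This is a minimal illustration only: SQLite stands in for a cloud warehouse, and the table and column names are hypothetical; in a real stack, tools like Fivetran or Airbyte handle the E and L steps and dbt orchestrates the T step.

```python
import sqlite3

# Extract: raw rows pulled from a source system (hypothetical schema).
raw_orders = [
    ("2024-01-05", "alice@example.com", 120.0),
    ("2024-01-05", "bob@example.com", 80.0),
    ("2024-01-06", "alice@example.com", 40.0),
]

# Load: dump the raw data into the warehouse untransformed (the "EL" of ELT).
wh = sqlite3.connect(":memory:")  # stand-in for Snowflake, BigQuery, etc.
wh.execute("CREATE TABLE raw_orders (order_date TEXT, email TEXT, amount REAL)")
wh.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)", raw_orders)

# Transform: model the data *inside* the warehouse with SQL --
# the step a tool like dbt would manage in a real deployment.
wh.execute("""
    CREATE TABLE daily_revenue AS
    SELECT order_date, SUM(amount) AS revenue
    FROM raw_orders
    GROUP BY order_date
    ORDER BY order_date
""")

print(wh.execute("SELECT * FROM daily_revenue").fetchall())
# prints [('2024-01-05', 200.0), ('2024-01-06', 40.0)]
```

The key point of ELT is visible in the shape of the code: raw data lands in the warehouse first, and all modeling happens afterwards, in SQL, where the data already lives.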
The platform layer of generative AI focuses on providing access to large language models (LLMs) through a managed service, which simplifies the fine-tuning and customization of general-purpose, pre-trained foundation models like OpenAI's GPT.

People.ai is an AI platform that aims to revolutionize sales performance by automating sales workflows and providing insights into sales activities.
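Fine-tuning through a managed service, as described above, typically starts with preparing training examples in the provider's expected file format. The sketch below builds a chat-style JSONL dataset of the general shape used by OpenAI's fine-tuning endpoints; the example prompts and completions are hypothetical, and a real job would upload the resulting file via the provider's API.

```python
import json

# Hypothetical domain examples for customizing a general-purpose model.
examples = [
    {"prompt": "Summarize this sales call in one sentence.",
     "completion": "The customer requested a renewal quote for Q3."},
    {"prompt": "Classify this email: 'Please cancel my subscription.'",
     "completion": "churn-risk"},
]

# Each training example becomes one JSON line in chat format.
lines = []
for ex in examples:
    record = {"messages": [
        {"role": "user", "content": ex["prompt"]},
        {"role": "assistant", "content": ex["completion"]},
    ]}
    lines.append(json.dumps(record))

jsonl = "\n".join(lines)  # contents of a training .jsonl file
print(lines[0])
```

The managed service then handles the actual training run, so customization reduces to curating data like this rather than managing GPUs.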
ChatGPT, in comparison, is presently limited to training data with a cutoff of September 2021. As generative AI continues to evolve, its applications across various industries will expand, unlocking new opportunities for automation, creativity, and enhanced customer experiences. The competitive landscape will see fierce competition among tech giants and startups, driving further innovation and advancement in the field.
Closed-source models generate revenue by charging customers for API usage or subscription-based access. The next iteration of AI21 Labs' Jurassic model, Jurassic-2, is a highly customizable language model: comprehensive instruction tuning on proprietary data gives it advanced instruction-following capabilities.
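Usage-based API revenue is typically metered per token. A toy cost calculation makes the billing model concrete; the per-1,000-token rates below are hypothetical, not any vendor's actual price list.

```python
# Hypothetical per-1,000-token rates for a closed-source model API.
PRICE_PER_1K = {"prompt": 0.003, "completion": 0.006}

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Cost of a single API call under usage-based billing."""
    return (prompt_tokens / 1000) * PRICE_PER_1K["prompt"] \
         + (completion_tokens / 1000) * PRICE_PER_1K["completion"]

# A month of traffic: 10M prompt tokens in, 2M completion tokens out.
monthly = request_cost(10_000_000, 2_000_000)
print(f"${monthly:.2f}")  # prints $42.00
```

Completion tokens are often priced higher than prompt tokens, which is why the two are metered separately.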
- Proprietary or Closed-Source Foundation Models (OpenAI, Google Bard)
- Pre-trained Models (connecting with APIs)
In addition to personalized investment advice and fraud prevention, virtual financial advisors powered by natural language processing are also becoming more common. These chatbots answer customer questions about finances in real time, using machine learning algorithms to understand natural language queries. As this technology continues to advance, we can expect even more personalized and efficient financial services for customers in the future. For example, gen AI can be used to create new content, such as music or images, for a variety of purposes, such as giving creatives more flexibility and room for imagination. It can also be used to improve machine learning algorithms by generating new training data.
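As a toy illustration of the request/response shape of such an advisor chatbot: real systems use trained NLP models rather than keyword rules, and every intent and canned answer below is hypothetical.

```python
# Keyword-rule stand-in for a trained intent classifier.
INTENTS = {
    "balance": ["balance", "how much money"],
    "fraud": ["fraud", "unauthorized", "suspicious"],
    "advice": ["invest", "save", "retirement"],
}

# Hypothetical canned responses; None covers unrecognized queries.
ANSWERS = {
    "balance": "Your current balance is shown on the app's home screen.",
    "fraud": "I've flagged this for our fraud team and temporarily locked your card.",
    "advice": "Based on your profile, a diversified index fund may fit your goals.",
    None: "I'm not sure -- let me connect you with a human advisor.",
}

def classify(query: str):
    """Map a natural-language query to an intent label (or None)."""
    q = query.lower()
    for intent, keywords in INTENTS.items():
        if any(k in q for k in keywords):
            return intent
    return None

def answer(query: str) -> str:
    return ANSWERS[classify(query)]

print(answer("I saw a suspicious charge on my card"))
```

A production system would replace `classify` with a model call, but the surrounding flow, from query to intent to response, stays the same.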
The other categories — the boxes on our landscape that are relatively sparse now — I don’t think are going to stay sparse for long. When I first put out our blog post about how the different pieces of technology are becoming ready, I thought 3D, video and bio were going to take longer, based on some conversations. Basically everyone wrote in to me like, “You’re wrong, this stuff is happening way faster than you think it is.” And they were right. Similar to what you saw happen with text and image when those models were a couple of years back, I think you’ll start to see the application space flourish for these other modalities as well.
Marketing’s Generative AI Future
We are particularly excited about how gen AI is being used in this space to simplify real-time patient data interpretation, streamline disparate data sources, and significantly improve clinician productivity.

Over the past few years, we have witnessed a gradual yet definitive progression in artificial intelligence (AI) capabilities, now stretching beyond image recognition to natural language understanding with generative AI technologies. Simply stated, ChatGPT leverages an underlying machine learning model to perform natural language processing (NLP).

A massive number of intriguing business use cases result from the use of generative AI tools. The potential size of this market is hard to grasp (somewhere between all software and all human endeavors), so we expect many, many players and healthy competition at all levels of the stack. We also expect both horizontal and vertical companies to succeed, with the best approach dictated by end-markets and end-users.
- Additionally, learning the command input to achieve the desired output may take some time.
- Doing this has allowed us to help hundreds of companies to transform their business and save millions.
- The industry is grappling with a stream of events that has created massive supply-chain disruptions, with long-lasting effects on organizations, the economy and the environment.
- Larger enterprises and those that desire greater analysis or use of their own enterprise data with higher levels of security and IP and privacy protections will need to invest in a range of custom services.
- This has not helped MAD companies much, as the overwhelming majority of companies on the landscape are B2B vendors.
The vast majority of the organizations appearing on the MAD landscape are unique companies, a very large number of them VC-backed startups. A number of others are products (such as those offered by cloud vendors) or open-source projects. Replicate, for its part, is a versatile model hub that enables developers to share, discover, and reproduce machine learning projects across various domains.
However, the skills required to develop generative-AI-powered solutions are scarce and expensive, and many traditional businesses struggle to recruit such profiles, who are in high demand at technology companies. Many point solutions boast models with very high performance in their specific area, and in many cases they can also provide rapid time to value, as they are nearly ready to use. As a leading AI services provider, Wizeline intends to promote collaboration and knowledge sharing by continuously improving our map. Through this frequently updated resource, we empower you to navigate the complex landscape of generative AI, understand regulatory guidelines, stay ahead of the competition, and unlock the transformative power of these new technologies.
As attention on generative AI increases, ever more startups will develop AI-powered solutions that solve specific problems within organizations: for example, AI-powered email generation for sales development representatives, or AI-powered contract review for purchasing. The generative AI application landscape will surely continue to grow in the coming months and years.
Hardware and Cloud Platforms
Users can generate images by typing the /imagine command followed by a prompt; the bot generates four images, from which the user selects the one they want to upscale.

Nvidia’s H100 Tensor Core, the company’s ninth-generation data center GPU, contains 80 billion transistors and is optimized for large-scale AI and high-performance computing (HPC) workloads. The A100, Nvidia’s predecessor to the H100, remains one of the best GPUs for deep learning.

In 2019, OpenAI released GPT-2, a model that could generate realistic, human-like text in entire paragraphs with internal consistency, unlike any previous model. The next generation, GPT-3, launched in 2020, was trained with 175 billion parameters.
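The /imagine flow described above follows a simple chat-command convention. A hedged sketch of parsing such a command from a message string; the parsing rules here are an assumption for illustration, not Midjourney's actual implementation.

```python
def parse_imagine(message: str):
    """Extract the prompt from a '/imagine <prompt>' command, or None."""
    command = "/imagine"
    text = message.strip()
    if not text.startswith(command):
        return None  # not an image-generation command
    prompt = text[len(command):].strip()
    return prompt or None  # empty prompt is treated as invalid

print(parse_imagine("/imagine a watercolor fox in the snow"))
# prints a watercolor fox in the snow
```

Once the prompt is extracted, the bot would hand it to the image model and return the grid of candidate images for the user to pick from.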
Leveraging open-source foundation models brings several advantages, including high accuracy, the ability to generate high-quality content, scalability to large user bases, and transparency. This transparency allows users to comprehend the workings of these models and make necessary improvements. However, challenges exist, including their complexity, potential bias in the training data, and the risk of misuse for generating harmful content, such as hate speech or misinformation. The benefits of using closed-source foundation models are their high accuracy, the production of high-quality content, scalability to meet the needs of many users, and security against unauthorized access. On the downside, their development and maintenance can be costly, there can be bias from the training data, and they carry the same potential for misuse in generating harmful content.