
Seven generative AI innovations from AWS Summit New York 2023

Amazon has been developing AI and ML technology for more than 25 years, and recent ML innovations have made the capabilities of generative AI possible.


In his keynote speech at the event, Swami Sivasubramanian, vice president of Database, Analytics, and Machine Learning at Amazon Web Services (AWS), said he expects that AWS services and capabilities will democratise the use of generative artificial intelligence (generative AI)—broadening access for all types of customers, across all lines of business—from engineering to marketing to customer service to finance and sales.

“Generative AI has captured our imaginations,” Sivasubramanian said. “This technology has reached its tipping point.”

What is generative AI? It’s a type of machine learning (ML) powered by ultra-large models, including large language models (LLMs). These models are pre-trained on a vast amount of data and are known as “foundation models” (FMs).

Generative AI will help improve experiences for customers as they interact with virtual assistants, intelligent customer contact centers, and personalised shopping services. An employee might see their productivity boosted by generative AI–powered conversational search, text summarisation, or code generation tools. Business operations will improve with intelligent document processing or quality controls built with generative AI. And customers will be able to use generative AI to turbocharge the production of all types of creative content.

Sivasubramanian underscored how all this value for generative AI will be unlocked with AWS—and how AWS customers will bring these AI-powered experiences to life.

First, model choice will be paramount. No one model will rule them all. Rather, organisations will need to be able to choose the right model for the right job. Then, customers will need to be able to securely customise these models with their own data. For example, an advertising company may want to fine-tune a model by showing it the company’s top-performing ad copy, while an online retailer may want to give the model access to its inventory details so it can pull up the right information when a customer asks.

What generative AI means for businesses and how AWS can help

The new AWS Generative AI Innovation Center helps customers successfully build and deploy custom generative AI products and services.

Easy-to-use tools are also a key part of democratising AI within organisations, along with the ability to deliver responses that are low cost and low latency, thanks to purpose-built ML infrastructure. Much of this innovation will be built with Amazon Bedrock, a service offered by AWS that helps organisations of any size, across all industries around the world, easily build and scale their own generative AI applications. It does this by giving customers easy access to a wide range of FMs through a simple API and making it easy to leverage existing data stores to customise them.
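To make that API-driven access concrete, here is a minimal sketch of what invoking a Bedrock-hosted model from Python might look like. The model ID, prompt, and request-body field names below are illustrative assumptions (the body schema varies by model provider), and the live call is left as a comment because it requires AWS credentials and Bedrock model access.

```python
import json

# Hypothetical prompt; the request-body fields follow an Anthropic-style
# shape and are illustrative, not authoritative for every provider.
prompt = "Draft a two-sentence product description for trail running shoes."
body = json.dumps({
    "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
    "max_tokens_to_sample": 200,
})

# With AWS credentials and model access configured, the call would be roughly:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.invoke_model(modelId="anthropic.claude-v2", body=body)
#   print(json.loads(response["body"].read()))

# Here we only verify that the request payload is well-formed JSON.
payload = json.loads(body)
print(payload["max_tokens_to_sample"])
```

The point of the sketch is the shape of the workflow: the application builds a provider-specific JSON body and sends it to a single API, rather than integrating with each model vendor separately.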


Here are seven generative AI updates announced at the AWS Summit in New York.

Firstly, AWS expands Amazon Bedrock with a new model provider and additional FMs

Since model choice is paramount, Amazon Bedrock is expanding to include Cohere as an FM provider, along with the latest FMs from Anthropic and Stability AI.

Cohere will add its flagship text generation model, Command, as well as its multilingual text understanding model, Cohere Embed. Additionally, Anthropic has brought Claude 2, the latest version of their language model, to Amazon Bedrock, and Stability AI announced it will release the latest version of Stable Diffusion, SDXL 1.0, which produces improved image and composition detail, generating more realistic creations for films, television, music, and instructional videos. These FMs join AWS’s existing offerings on Amazon Bedrock, including models from AI21 Labs and Amazon, to help meet customers where they are on their machine learning journey, with a broad and deep set of AI and ML resources for builders of all levels of expertise.
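As a sketch of how a developer might exercise that model choice programmatically, the snippet below filters a hard-coded sample shaped like a response from Bedrock’s model-listing API. The model IDs are illustrative assumptions, and the live boto3 call is left as a comment since it needs AWS credentials.

```python
# With credentials configured, a live listing would look roughly like:
#   import boto3
#   response = boto3.client("bedrock").list_foundation_models()
# Below is a simplified, hard-coded sample of that response shape
# (the model IDs are illustrative assumptions):
sample = {
    "modelSummaries": [
        {"modelId": "anthropic.claude-v2", "providerName": "Anthropic"},
        {"modelId": "cohere.command-text-v14", "providerName": "Cohere"},
        {"modelId": "stability.stable-diffusion-xl-v1", "providerName": "Stability AI"},
    ]
}

# Pick out one provider's models, e.g. to route text-generation traffic.
cohere_models = [m["modelId"] for m in sample["modelSummaries"]
                 if m["providerName"] == "Cohere"]
print(cohere_models)
```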

Secondly, customers can now create agents for Amazon Bedrock to automate complex tasks and deliver customised, up-to-date answers for their applications, based on their proprietary data

While FMs are incredibly powerful on their own for a wide range of tasks, such as summarisation, they need additional programming to execute more complex requests. For example, they don’t have access to company data, such as the latest inventory information, and they can’t automatically call internal APIs. Developers spend hours writing code to overcome these challenges.

With just a few clicks, agents for Amazon Bedrock will automatically break down tasks and create an orchestration plan, without any manual coding, making it easier for developers to build generative AI applications. For example, to service a customer request to exchange a pair of shoes (“I want to exchange these black shoes for a brown pair instead”), the agent securely connects to company data, automatically converts it into a machine-readable format, provides the FM with the relevant information, and then calls the right set of APIs to service the request.
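The orchestration pattern described above can be sketched in plain Python. This is a conceptual toy, not the Agents for Amazon Bedrock API: the inventory data, step names, and routing logic are all invented for illustration.

```python
# Toy stand-ins for company data and internal APIs (all hypothetical).
INVENTORY = {"shoes-black-42": 3, "shoes-brown-42": 5}

def check_stock(sku: str) -> bool:
    """Step 1: look up the requested item in company data."""
    return INVENTORY.get(sku, 0) > 0

def create_exchange(old_sku: str, new_sku: str) -> str:
    """Step 2: call the (pretend) internal exchange API."""
    return f"exchange created: {old_sku} -> {new_sku}"

def handle_request(old_sku: str, new_sku: str) -> str:
    """An agent-style plan: break the request into ordered steps,
    consult the data, then call the right API."""
    if not check_stock(new_sku):
        return "replacement out of stock"
    return create_exchange(old_sku, new_sku)

print(handle_request("shoes-black-42", "shoes-brown-42"))
```

The value proposition in the announcement is that this plan-then-call glue code, which developers currently write by hand, is generated and executed by the agent.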

In the third update, vector engine support for Amazon OpenSearch Serverless gives customers a simpler way to leverage vectors for search

Vector embeddings allow machines to understand relationships across text, images, audio, and video content in a format that’s digestible for ML, making everything from online product recommendations to smarter search results work. Now, with vector engine support for Amazon OpenSearch Serverless, developers will have a simple, scalable, and high-performing solution to build ML-augmented search experiences and generative AI applications without having to manage vector database infrastructure.
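To illustrate the underlying idea, here is a self-contained sketch of similarity search over toy vector embeddings. A real deployment would store high-dimensional vectors produced by an embedding model in the OpenSearch Serverless vector engine; the three-dimensional vectors and document names here are made up for demonstration.

```python
import math

# Toy 3-d "embeddings"; real embeddings have hundreds or thousands
# of dimensions and come from an embedding model.
documents = {
    "running shoes": [0.9, 0.1, 0.0],
    "trail boots":   [0.8, 0.3, 0.1],
    "coffee maker":  [0.0, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def search(query_vec, k=2):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(documents,
                    key=lambda d: cosine_similarity(query_vec, documents[d]),
                    reverse=True)
    return ranked[:k]

print(search([1.0, 0.2, 0.0]))  # footwear ranks above the coffee maker
```

A vector engine does exactly this ranking, but over millions of vectors with approximate nearest-neighbour indexes instead of a brute-force sort.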

Fourthly, generative business intelligence (BI) in Amazon QuickSight produces business intelligence from natural language questions, making insights more accessible

Amazon QuickSight is a unified business intelligence service that helps organisations’ employees easily find answers to questions about their data. Now, QuickSight is combining its existing ML innovations with new LLM capabilities available through Amazon Bedrock to provide generative AI capabilities, called generative BI. These capabilities will help break down silos, making it even easier to collaborate on data across an organisation and speeding up data-driven decision making. Using everyday natural language prompts, analysts will be able to author or fine-tune dashboards, and business users will be able to share insights with compelling visuals within seconds.

Through the fifth update, AWS HealthScribe will use generative AI to ease the paperwork burden for health care professionals, giving time back to patients

Updating electronic health records is one of the most cumbersome tasks for doctors and nurses. Clinicians will find relief as this HIPAA-eligible service empowers health care software vendors to more easily build clinical applications that leverage generative AI. HealthScribe uses speech recognition and Amazon Bedrock–powered generative AI to create transcripts and generate easy-to-review clinical notes, with built-in security and privacy features designed to protect sensitive patient data.

In the sixth update, new Amazon Elastic Compute Cloud (Amazon EC2) P5 instances harness NVIDIA H100 graphics processing units (GPUs) to accelerate generative AI training and inference

These Amazon EC2 P5 instances, now generally available, are powered by NVIDIA H100 Tensor Core GPUs, which are optimised for training LLMs and developing generative AI applications. (An “instance”, in cloud lingo, is virtual access to a compute resource; in this case, compute powered by H100 GPUs.) AWS is the first leading cloud provider to make NVIDIA’s highly sought-after H100 GPUs generally available in production. These instances are ideal for training and running inference for increasingly complex LLMs and compute-intensive generative AI applications, including question answering, code generation, video and image generation, speech recognition, and more. With access to H100 GPUs, customers will be able to create their own LLMs and FMs on AWS faster than ever.

And finally, AWS offers seven free and low-cost skills training courses to help you use generative AI

More than 75% of organisations plan to adopt big data, cloud computing, and AI in the next five years, according to the World Economic Forum. To help people train for the AI and ML jobs of the future, AWS released on-demand skills training to support those who want to understand, implement, and begin using generative AI. Amazon has designed training courses specifically for developers who want to use Amazon CodeWhisperer; engineers and data scientists who want to leverage generative AI by training and deploying FMs; executives seeking to understand how generative AI can address their business challenges; and AWS Partners helping their customers harness generative AI’s potential.

(Source: AWS)
