AI Infrastructure Alliance
Featured
# llmsandgenairevolution2023

Daniel Jeffries – AIIA – The Power of Agents and How to Build Them Right

Daniel Jeffries

Manojkumar Parmar & Yuvaraj Govindarajulu · Sep 14th, 2023

AIShield – Navigating the Generative AI Revolution: Ensuring Compliance and Security in the Era of LLMs with Guardrails

In the era of ChatGPT-like technology, businesses are embracing the power of large language model (LLM) applications to enhance customer experiences, automate processes, and drive innovation. However, these tools also carry hidden dangers and risks, including ethical concerns, data breaches, and IP infringement, so businesses need robust safeguards to protect against legal and policy challenges and to mitigate the associated risks. This talk delves into the challenges and risks of LLM adoption and unveils AIShield.GuArdIan, a solution tailored for businesses using ChatGPT-like technology that provides a vital safeguard to ensure compliance and minimize vulnerabilities. AIShield.GuArdIan offers comprehensive application security controls at the input and output stages, ensuring legal compliance, policy adherence, and risk mitigation. By filtering content, enforcing user-based policy controls, and protecting sensitive information, it creates a secure environment for LLM interactions, safeguarding businesses from legal, ethical, and reputational risks. With AIShield.GuArdIan, businesses gain granular control over LLM applications, balancing innovation and compliance. Join this talk to gain a deeper understanding of the challenges and risks surrounding LLM adoption and to explore how AIShield.GuArdIan can serve as a critical safety net. Discover how its application security controls enable businesses to navigate the dynamic AI landscape confidently, ensuring AI safety, data privacy, and ethical AI practices. By embracing AIShield.GuArdIan, businesses can construct robust guardrails for LLMs and enter the new AI era with confidence and security.
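As a rough illustration of the input/output control pattern described above (not AIShield.GuArdIan's actual API), a guardrail layer might screen prompts on the way in and redact sensitive data on the way out; the rules and function names below are assumptions for the sketch.

```python
import re

# Hypothetical guardrail layer around an LLM call. The patterns and function
# names are assumptions for illustration, not AIShield.GuArdIan's API.
BLOCKED_INPUT_PATTERNS = [r"ignore previous instructions", r"reveal the system prompt"]
SENSITIVE_OUTPUT_PATTERNS = [r"\b\d{3}-\d{2}-\d{4}\b"]  # e.g. SSN-like strings

def check_input(prompt: str) -> bool:
    """Reject prompts that match known prompt-injection patterns."""
    return not any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_INPUT_PATTERNS)

def redact_output(text: str) -> str:
    """Mask sensitive strings before the response reaches the user."""
    for pattern in SENSITIVE_OUTPUT_PATTERNS:
        text = re.sub(pattern, "[REDACTED]", text)
    return text

def guarded_completion(prompt: str, llm_call) -> str:
    """Apply input policy checks, call the model, then filter the output."""
    if not check_input(prompt):
        return "Request blocked by input policy."
    return redact_output(llm_call(prompt))
```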
# llmsandgenairevolution2023
Fabiana Clemente · Sep 12th, 2023

Fabiana Clemente – YData – Data-Centric AI in the era of LLM - data as an unfair advantage

In today's rapidly evolving digital landscape, the emergence of LLMs has revolutionized the field of artificial intelligence. These models have demonstrated remarkable capabilities in natural language processing, understanding, and generation. However, beneath the impressive performance lies a fundamental truth: data is the lifeblood of AI. This session explores the concept of data-centric AI in the context of the LLM era. It sheds light on the notion that data has become an unfair advantage in the AI landscape and delves into the reasons behind this advantage and its implications for various stakeholders, including individuals and organizations. Throughout the session, we examine the key factors that contribute to data's unfair advantage. We discuss how large-scale datasets have fueled the development of LLMs, enabling them to acquire an astonishingly broad understanding of human language and context. We also explore the challenges faced by those with limited access to quality data, underscoring the potential biases and inequalities that arise as a result. We will analyze the implications of data monopolies, the changes required in current data governance processes, privacy concerns, and potential consequences for marginalized communities. Additionally, we will explore strategies to address these challenges, including data-sharing initiatives, transparency frameworks, and regulatory interventions.
# llmsandgenairevolution2023
The AI revolution brings both risk and cost to delivering business value. When companies overlook model reproducibility and transparency, their ability to trust what AI creates is put at risk. In our talk, we address automating complex data pipelines to improve AI/ML model reliability; best practices for data versioning, reproducibility, and AI/ML workflows; practical insights and actionable strategies for developing scalable AI/ML applications; and the latest tooling considerations for versioning unstructured and structured data.
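As a minimal sketch of the data-versioning idea (a generic pattern, not the speakers' specific tooling), a content-addressed manifest can pin the exact dataset state a model was trained on:

```python
import hashlib
import json
from pathlib import Path

# Content-addressed dataset snapshot: each file is identified by the hash of its
# bytes, and a manifest records the exact dataset state used for a training run.
# Tools such as DVC or lakeFS build storage backends and branching on this idea.

def file_digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def compute_manifest(data_dir: str) -> dict:
    root = Path(data_dir)
    return {str(p.relative_to(root)): file_digest(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

def snapshot(data_dir: str, manifest_path: str = "dataset.manifest.json") -> None:
    """Record the current dataset state alongside the training code."""
    Path(manifest_path).write_text(json.dumps(compute_manifest(data_dir), indent=2))

def verify(data_dir: str, manifest_path: str = "dataset.manifest.json") -> bool:
    """True if the data directory still matches the recorded snapshot."""
    return json.loads(Path(manifest_path).read_text()) == compute_manifest(data_dir)
```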
# llmsandgenairevolution2023
Join us for an introduction to NSQL, a new family of open-source foundation models for automating SQL generation tasks. This talk will discuss the limitations of existing open- and closed-source foundation models for enterprise use, including issues of customization, quality, and privacy. We will highlight how NSQL addresses these challenges with its open-source nature, specialized training for SQL tasks, and a range of model sizes to accommodate diverse hardware configurations. The talk will also cover NSQL's data generation process and training approach, underlining its advantages over other foundation models for SQL generation. We will demonstrate how the NSQL models outperform existing open-source models for SQL generation and, by starting from the newest commercially available Llama 2 model, even beat closed-source models.
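A minimal text-to-SQL sketch of how an NSQL-style model can be prompted with a schema and a question; the checkpoint name and prompt template below are examples and should be checked against the published model card before use:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Example only: confirm the checkpoint name and prompt template on the model card.
model_id = "NumbersStation/nsql-llama-2-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

schema = "CREATE TABLE orders (id INT, customer_id INT, total DECIMAL, created_at DATE);"
question = "What was the total order value per customer in 2023?"

prompt = (
    f"{schema}\n\n"
    "-- Using valid SQL, answer the following question for the table provided above.\n\n"
    f"-- {question}\n\nSELECT"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
completion = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print("SELECT" + completion)
```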
# llmsandgenairevolution2023
LLMs offer a novel way to build data pipelines by generating complex code from simple text prompts, enabling non-technical users to build AI data pipelines independently. The traditional approach to designing pipeline workflows often requires knowledge of programming languages, which can be challenging and discouraging for non-engineers. Explore this concept with us as we walk through the future of an LLM-powered platform that empowers users to construct intricate data pipelines without developer involvement!
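A hedged sketch of the core idea: a plain-English request is turned into pipeline code by an LLM, which can then be reviewed and run in a sandbox. The client, model name, and prompts are illustrative assumptions, not the platform described in the talk.

```python
from openai import OpenAI  # any chat-completion client would work here

# Illustrative only: the model name, prompts, and workflow are assumptions.
client = OpenAI()

user_request = (
    "Load sales.csv, drop rows with missing prices, add a revenue column "
    "equal to price * quantity, and write daily totals to revenue_by_day.parquet."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "system",
         "content": "You generate self-contained pandas scripts. Output only Python code."},
        {"role": "user", "content": user_request},
    ],
)

generated_code = response.choices[0].message.content
print(generated_code)  # review or auto-validate before executing in a sandbox
```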
# llmsandgenairevolution2023
We will cover the challenges of going from a trained model to a scalable endpoint. At Mystic, we have built a fully managed, enterprise-grade platform designed to deploy ML models at scale with high throughput and consistent performance across your preferred compute environment.
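As a generic illustration of the "model to endpoint" step (not Mystic's SDK), a model can be wrapped in a small HTTP service; a managed platform then adds autoscaling, batching, and monitoring on top:

```python
# serve.py -- generic model-behind-an-endpoint sketch, not Mystic's SDK.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
classifier = pipeline("sentiment-analysis")  # stand-in for any trained model

class PredictRequest(BaseModel):
    text: str

@app.post("/predict")
def predict(req: PredictRequest):
    # A managed platform layers autoscaling, batching, and monitoring over this.
    return classifier(req.text)[0]

# Run locally with: uvicorn serve:app --host 0.0.0.0 --port 8000
```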
# llmsandgenairevolution2023
One of the emerging challenges of applying generative AI to real-world problems is that these models are prone to hallucination. While the latest generative AI models can produce highly compelling demos with little effort, building reliable features where accuracy matters remains a challenge. This talk will discuss approaches to building robust, AI-powered features that you can trust to solve real-world problems. I'll also share our experience at Anzen applying these techniques to two use cases: AI-powered underwriting and analyzing employee agreements for compliance insights.
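One common mitigation pattern, sketched here under the assumption of a generic chat-completion client (this is not a description of Anzen's system), is to ground the model in retrieved source text and run a second verification pass over the draft answer:

```python
from openai import OpenAI

# Illustrative pattern only; model name and prompts are assumptions.
client = OpenAI()
MODEL = "gpt-4o-mini"

def answer_with_grounding(question: str, context: str) -> str:
    # First pass: answer strictly from the retrieved context.
    draft = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": "Answer only from the provided context. If it is insufficient, say so."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    ).choices[0].message.content

    # Second pass: verify that every claim in the draft is backed by the context.
    verdict = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": "Reply SUPPORTED or UNSUPPORTED: is every claim in the answer backed by the context?"},
            {"role": "user", "content": f"Context:\n{context}\n\nAnswer:\n{draft}"},
        ],
    ).choices[0].message.content.strip().upper()

    return draft if verdict.startswith("SUPPORTED") else "Low confidence: escalate to human review."
```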
# llmsandgenairevolution2023
Serving fine-tuned large language models at scale poses significant challenges in terms of quality, computational resources, and cost efficiency. This talk will demonstrate a combination of techniques used to address these challenges and the tradeoffs involved.
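One illustrative cost/quality tradeoff (not necessarily the speaker's stack) is to serve a 4-bit-quantized base model with a small LoRA adapter on top, so many fine-tunes can share one copy of the base weights; the model and adapter ids below are examples:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Example ids only: the base model is public, the adapter path is hypothetical.
base_id = "meta-llama/Llama-2-7b-hf"
adapter_path = "my-org/customer-support-lora"

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_path)  # small LoRA weights on top
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("Summarize this support ticket:", return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```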
# llmsandgenairevolution2023
Max Cembalest & Rowan Cheung · Sep 12th, 2023

Max Cembalest, Rowan Cheung – Arthur – LLMs for Evaluating LLMs

As LLMs grow in popularity in natural language applications, proper testing will be increasingly needed to determine which models are best suited for which purposes. Classical metrics, benchmarks, and datasets from the pre-LLM decade of NLP can be rather limited, which has led many practitioners to use new LLMs to evaluate other new LLMs. We will cover emerging best practices in LLM evaluation, such as recognizing when LLMs work better than classical metrics and when LLMs can create testing datasets that are more relevant than existing benchmarks. We will also cover emerging risks and explore when LLMs are biased in how they respond when prompted to judge other models: for example, is an LLM biased to prefer its own output over another model's?
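A minimal LLM-as-judge sketch with a position swap to surface ordering bias; the judge model and prompt wording are illustrative assumptions, not Arthur's methodology:

```python
from openai import OpenAI

# Judge model and prompt wording are illustrative assumptions.
client = OpenAI()
JUDGE = "gpt-4o-mini"

def judge(question: str, answer_a: str, answer_b: str) -> str:
    """Return 'A', 'B', or 'TIE' for which answer better addresses the question."""
    prompt = (
        f"Question: {question}\n\nAnswer A:\n{answer_a}\n\nAnswer B:\n{answer_b}\n\n"
        "Which answer is better? Reply with exactly one of: A, B, TIE."
    )
    reply = client.chat.completions.create(
        model=JUDGE, messages=[{"role": "user", "content": prompt}]
    ).choices[0].message.content.strip().upper()
    return reply if reply in {"A", "B", "TIE"} else "TIE"

def judge_with_position_swap(question: str, ans1: str, ans2: str) -> str:
    """Ask twice with the answers swapped; disagreement signals position bias."""
    first = judge(question, ans1, ans2)
    swapped = {"A": "B", "B": "A", "TIE": "TIE"}[judge(question, ans2, ans1)]
    return first if first == swapped else "INCONSISTENT"
```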
# llmsandgenairevolution2023
GPT-4 models solve advanced general language and multimodal tasks, but they are generic, complex, and costly to train, and general-purpose models are often a poor fit for domain-specific data. What if you could use your data to produce even better models for your specific use cases? We'll demonstrate LLM and multimodal models, show how fine-tuning and retraining these models from scratch with your data can significantly improve their performance, and show how HPE Machine Learning Development Environment is the ideal platform to optimize models on your infrastructure.
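A generic fine-tuning sketch (not specific to HPE Machine Learning Development Environment): continue training a small pretrained language model on your own domain text; the model id and data path are examples.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Example model id and data path; swap in your own base model and domain corpus.
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-lm", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```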
# llmsandgenairevolution2023