Brian Sletten is a liberal arts-educated software engineer with a focus on forward-leaning technologies. His experience has spanned many industries including retail, banking, online games, defense, finance, hospitality and health care. He has a B.S. in Computer Science from the College of William and Mary and lives in Auburn, CA. He focuses on web architecture, resource-oriented computing, social networking, the Semantic Web, AI/ML, data science, 3D graphics, visualization, scalable systems, security consulting and other technologies of the late 20th and early 21st Centuries. He is also a rabid reader and devoted foodie, and has excellent taste in music. If pressed, he might tell you about his International Pop Recording career.
Somewhere between the positions of “AI is going to change everything” and “AI is currently an overhyped means of propping up Silicon Valley unicorn valuations” lives a useful reality: AI research is producing tools that can be exploited safely, meaningfully, and responsibly. They can save you money, speed up delivery, and create new opportunities that might not otherwise exist. The trick is understanding what they do well and what should be a big red flag.
In this talk I will lay out a framework for considering a range of technologies that fall under the umbrella of AI and highlight the costs, benefits, and risks to help you make better choices about what to pursue and what to avoid.
If you are getting tired of the appearance of new types of databases… too bad. We are increasingly relying on a variety of data storage and retrieval systems for specific purposes. Data does not have a single shape, and indexing strategies that work for one are not necessarily good fits for others. So after hierarchical, relational, object, graph, column-oriented, document, temporal, append-only, and everything else, get ready for Vector Databases to assist in the systematization of machine learning systems.
This will be an overview of the benefits of vector databases as well as an introduction to the major players.
We will focus on open source versus commercial players, hosted versus local deployments, and the attempts to add vector search capabilities to existing storage systems.
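To ground the discussion, here is a minimal sketch of the core operation every vector database optimizes: nearest-neighbor search over embedding vectors. It deliberately uses plain NumPy and a brute-force cosine-similarity scan rather than any particular product's API, and the documents and random vectors are placeholders standing in for real embeddings.

```python
# Minimal brute-force nearest-neighbor search over embedding vectors.
# Real vector databases replace the linear scan with approximate indexes
# (e.g. HNSW or IVF), but the query contract is essentially the same.
import numpy as np

rng = np.random.default_rng(42)

# Placeholder corpus: in practice these vectors come from an embedding model.
documents = ["intro to RAG", "column-oriented storage", "graph traversal tips"]
doc_vectors = rng.normal(size=(len(documents), 384))  # 384 dims is a common embedding size
doc_vectors /= np.linalg.norm(doc_vectors, axis=1, keepdims=True)

def search(query_vector: np.ndarray, k: int = 2):
    """Return the k most similar documents by cosine similarity."""
    q = query_vector / np.linalg.norm(query_vector)
    scores = doc_vectors @ q                      # cosine similarity (unit-length vectors)
    top = np.argsort(scores)[::-1][:k]
    return [(documents[i], float(scores[i])) for i in top]

# Placeholder query vector; in practice, embed the user's query with the same model.
print(search(rng.normal(size=384)))
```

The interesting engineering in the products we will survey lies in replacing that linear scan with approximate indexes that stay fast at millions or billions of vectors.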
We will cover:
There is plenty of discussion about how machine learning will be applied to cybersecurity initiatives, but there is precious little conversation about the actual vulnerabilities of these systems themselves. Fortunately, there are a handful of research groups doing the work to assess the threats we face in systematizing data-driven systems. In this session, I will introduce the main concerns and how you can start to think about protecting against them.
We will mostly focus on the research findings of the Berryville Institute of Machine Learning. They have conducted a survey of the literature and have identified a taxonomy of the most common kinds of attacks including:
This will be a security-focused discussion. Only a basic understanding of machine learning is required.
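As a small taste of one commonly cited attack class, data poisoning, the sketch below flips a fraction of the training labels and compares the resulting model against a clean baseline. The dataset and model are toy placeholders chosen purely for illustration; they are not drawn from the Berryville Institute's report.

```python
# Toy illustration of data poisoning via label flipping: corrupt a fraction of
# the training labels and compare test accuracy against a clean baseline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker flips 30% of the training labels.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```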
Large Language Models (LLMs) such as ChatGPT and Llama have impressed us with what they can do. They have also horrified us with what they actually do when they are employed with no protection: hallucinations, stale knowledge bases, no conceptual basis for reasoning, and a capacity for toxic and inappropriate content generation. Rather than avoid them altogether or risk legal liability or brand damage, we can put some guardrails around them to benefit from their best traits without fearing their worst.
Retrieval Augmented Generation (RAG) systems augment the generation process with retrieved, trusted context to make it behave more to our liking. Come hear what you can do to benefit from AI systems without fearing them.
We will cover examples using LangChain and LlamaIndex, two open source frameworks for working with LLMs and creating RAG infrastructure.
We will cover:
Introduction to LLMs
Risks and Limitations
Basic RAG Systems
Embeddings
Vector Databases
Prompt Engineering
Testing and Validating LLMs and RAG Systems
Advanced Techniques
AI as Judge
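To make the “Basic RAG Systems” and “Embeddings” items above concrete, here is a minimal, framework-agnostic sketch of the retrieve-then-generate loop. The `embed` and `generate` callables are hypothetical stand-ins for whatever embedding model and LLM client you actually use; LangChain and LlamaIndex wrap these same steps behind their own abstractions.

```python
# Skeleton of a retrieve-then-generate (RAG) loop. The embed() and generate()
# callables are placeholders for your embedding model and LLM client of choice.
from typing import Callable, List
import numpy as np

def build_rag(embed: Callable[[str], np.ndarray],
              generate: Callable[[str], str],
              documents: List[str],
              k: int = 3) -> Callable[[str], str]:
    # Index step: embed every document once, up front.
    index = np.stack([embed(d) for d in documents])
    index /= np.linalg.norm(index, axis=1, keepdims=True)

    def answer(question: str) -> str:
        # Retrieval step: find the k documents most similar to the question.
        q = embed(question)
        q /= np.linalg.norm(q)
        top = np.argsort(index @ q)[::-1][:k]
        context = "\n\n".join(documents[i] for i in top)
        # Generation step: ground the model's answer in the retrieved context.
        prompt = (
            "Answer the question using only the context below. "
            "If the context is insufficient, say so.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}"
        )
        return generate(prompt)

    return answer
```

Everything else in the topic list, from prompt engineering to AI-as-judge validation, refines one of the steps in this loop.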
We have seen how Retrieval Augmented Generation (RAG) systems can help prop up Large Language Models (LLMs) to avoid some of their worst tendencies. But that is just the beginning. The cutting-edge, state-of-the-art systems are Multimodal and Agentic, involving additional models, tools, and reusable agents to break problems down into separate pieces, transform and aggregate the intermediate results, and validate the output before returning it to the user.
Come get introduced to some of the latest and greatest techniques for maximizing the value of your LLM-based systems while minimizing the risk.
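As a preview, the sketch below captures the shape of that agentic pattern: decompose the task, let specialized agents or tools handle the pieces, aggregate, then validate before answering. Every callable here is a hypothetical stand-in for an LLM call, tool, or agent, not any particular framework's API.

```python
# Shape of a simple agentic pipeline: plan -> solve sub-tasks -> aggregate -> validate.
# Each callable is a hypothetical stand-in for an LLM call, tool, or agent.
from typing import Callable, List

def run_agentic_task(task: str,
                     plan: Callable[[str], List[str]],     # break the task into sub-tasks
                     solve: Callable[[str], str],          # tool or agent handling one sub-task
                     combine: Callable[[List[str]], str],  # aggregate the partial results
                     validate: Callable[[str], bool],      # e.g. an "AI as judge" check
                     max_retries: int = 2) -> str:
    for _ in range(max_retries + 1):
        subtasks = plan(task)
        partial_results = [solve(s) for s in subtasks]
        draft = combine(partial_results)
        # Only return answers that pass validation; otherwise re-plan and retry.
        if validate(draft):
            return draft
    raise RuntimeError("No validated answer produced within the retry budget")
```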
We will cover: