
Harnessing Generative AI (GenAI) training course

The Generative AI course emphasizes hands-on learning, best practices, and practical strategies for deploying secure, scalable, and cost-effective GenAI solutions.

JBI training course London UK

"Our tailored course provided a well rounded introduction. It covered topics that we needed to know. The instructor genuinely cared about our learning. We felt supported from start to finish and left with knowledge that truly mattered to our work." Brian Leek, Data Analyst, May 2024

Public Courses

23/02/26 - 1 day
£2500 +VAT
18/05/26 - 1 day
£2500 +VAT
29/06/26 - 1 day
£2500 +VAT

Customised Courses

* Train a team
* Tailor content
* Flex dates
From £1200 / day
Trusted by EDF, Capita, Sky, NHS, RBS, BBC and Cisco

  • Understand core generative AI concepts and their potential applications in IT environments
  • Navigate and utilize AWS Bedrock to implement foundation models in practical scenarios
  • Apply effective prompt engineering techniques to achieve desired AI outputs
  • Develop integration patterns for incorporating generative AI into existing systems and workflows
  • Implement basic Retrieval-Augmented Generation (RAG) systems on AWS
  • Configure proper security controls and permissions for AWS generative AI services
  • Estimate and manage costs associated with generative AI implementations
  • Create a structured roadmap for generative AI adoption in their organisation
  • Identify appropriate AWS services for different generative AI use cases

Module 1: Introduction to Generative AI on AWS 

  • Understanding the foundations of generative AI and how it differs from traditional AI approaches
  • Exploring the evolution and capabilities of Large Language Models (LLMs)
  • Navigating AWS’s generative AI service ecosystem and understanding the role of each service
  • Identifying practical generative AI use cases relevant to IT departments and operations
  • Understanding the technical requirements and infrastructure considerations for AI implementation
  • Exploring the business value proposition and ROI considerations for generative AI projects

Module 2: AWS Bedrock Fundamentals 

  • Understanding AWS Bedrock as a managed service for foundation models
  • Exploring available foundation models in Bedrock (Anthropic Claude, Meta Llama, etc.)
  • Comparing model capabilities, strengths, and appropriate use cases
  • Understanding model parameters and their impact on performance and cost
  • Navigating the AWS Bedrock console and API interfaces
  • Exploring model inference options and configuration settings

 

Module 3: Hands-on Lab: First Steps with AWS Bedrock 

  • Setting up AWS Bedrock access and configuring necessary permissions
  • Exploring the AWS Bedrock console and available foundation models
  • Implementing effective prompt engineering techniques and best practices
  • Creating basic text generation applications using the Bedrock API
  • Understanding and adjusting key model parameters (temperature, top-p, tokens)
  • Building simple conversational interfaces with foundation models
  • Testing and evaluating model outputs across different scenarios
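The parameter-tuning step in this lab can be sketched in Python. The helper below shows how temperature, top-p, and max tokens map onto the `inferenceConfig` block of Bedrock's Converse API; the specific values are illustrative starting points only, and supported ranges vary by model.

```python
# Sketch of how the lab's key parameters map onto the Bedrock Converse
# API's inferenceConfig. Low temperature favours deterministic answers;
# higher values favour varied, creative output.

def inference_config(temperature: float = 0.2,
                     top_p: float = 0.9,
                     max_tokens: int = 512) -> dict:
    """Build the inferenceConfig dict passed to bedrock-runtime converse()."""
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature must be between 0 and 1")
    return {"temperature": temperature, "topP": top_p, "maxTokens": max_tokens}

factual = inference_config(temperature=0.1)               # terse, repeatable output
creative = inference_config(temperature=0.9, top_p=0.95)  # more varied output
```

In the lab, participants compare outputs from configurations like these side by side to see how sampling parameters change model behaviour.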

Module 4: AWS GenAI Integration Patterns 

  • Designing effective architectural patterns for generative AI integration
  • Implementing serverless AI solutions using AWS Lambda with Bedrock
  • Understanding when to use Amazon SageMaker for custom model training and deployment
  • Exploring AWS SDK integration options for different programming languages
  • Implementing security best practices for generative AI applications
  • Developing effective caching strategies to optimize performance and cost
  • Understanding API throttling, quotas, and scaling considerations

Module 5: Hands-on Lab: Building Your First AWS GenAI Solution 

  • Developing a document analysis system using AWS Bedrock and supporting services
  • Implementing Retrieval-Augmented Generation (RAG) with Amazon OpenSearch and Bedrock
  • Configuring AWS S3 for efficient document storage and retrieval
  • Setting up proper IAM roles and permissions for secure operation
  • Building API interfaces to your generative AI solution
  • Testing and troubleshooting common integration issues
  • Implementing basic monitoring and logging for your application

Module 6: Cost Management & Optimization 

  • Understanding AWS generative AI pricing models and cost components
  • Analyzing the cost implications of different foundation models and parameters
  • Implementing architectural patterns to optimize cost efficiency
  • Setting up AWS Budgets and cost alerts for generative AI workloads
  • Understanding token usage optimization techniques
  • Implementing caching strategies to reduce redundant API calls
  • Balancing cost, performance, and capability in model selection

Module 7: Implementation Planning 

  • Developing a framework for identifying high-value generative AI opportunities
  • Creating a structured 30-60-90 day implementation roadmap
  • Understanding governance considerations for responsible AI deployment
  • Exploring strategies for measuring success and demonstrating value
  • Navigating available resources for continued learning and development
  • Addressing common challenges and pitfalls in generative AI implementation
  • Open Q&A session for specific implementation questions

  • IT professionals and cloud engineers who manage or support AWS environments.

  • Developers and software engineers building applications that use generative AI.

  • Data engineers, data scientists, and AI practitioners exploring LLMs and RAG solutions.

  • Solution architects and technical leads designing AI-driven systems on AWS.

  • Product managers and IT leaders evaluating generative AI use cases and ROI.

  • Business analysts and automation teams identifying opportunities for AI-enabled workflows.


4.8 out of 5 average rating




“JBI did a great job of customizing their syllabus to suit our business needs and also bringing our team up to speed on the current best practices.” Brian F, Team Lead, RBS, Data Analysis Course, 20 April 2022

 

 


 



Course Description: Generative AI on AWS with Large Language Models

This course provides a comprehensive, hands-on introduction to building, integrating, and operationalizing generative AI solutions using AWS technologies. Participants will learn how Large Language Models (LLMs) work, how to leverage AWS Bedrock and related AI services, and how to implement secure, scalable, and cost-efficient generative AI applications tailored for IT operations and enterprise environments.

Through a combination of conceptual instruction, architectural walkthroughs, and practical labs, learners will gain the skills needed to evaluate generative AI opportunities, build working prototypes, integrate models into existing systems, and plan real-world implementation projects.

1. What is generative AI, and how is it different from traditional AI?

Generative AI creates new content—such as text, images, or code—based on patterns learned during training. Traditional AI typically focuses on prediction, classification, or detection. Generative models like LLMs can produce human-like outputs and support advanced reasoning tasks.

2. What is an LLM (Large Language Model)?

An LLM is a deep learning model trained on massive amounts of text data to understand and generate natural language. Examples include Anthropic Claude and Meta Llama, which are accessible through AWS Bedrock.

3. What is AWS Bedrock?

AWS Bedrock is a fully managed service that provides access to leading foundation models through a simple API. It allows you to use generative AI without managing infrastructure or training large models yourself.
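As a sketch of what "a simple API" means in practice, the Python snippet below calls a Bedrock model through boto3's Converse API. The model ID shown is one example; your account must be granted access to that model in the Bedrock console, and valid AWS credentials are required for the actual call.

```python
# Minimal sketch of invoking a Bedrock foundation model via boto3.
# The model ID is an example; check which models your account can access.

def build_converse_request(prompt: str,
                           model_id: str = "anthropic.claude-3-haiku-20240307-v1:0",
                           max_tokens: int = 512) -> dict:
    """Assemble the keyword arguments for bedrock-runtime's converse()."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

def ask_bedrock(prompt: str) -> str:
    """Send one prompt and return the model's text reply (needs AWS credentials)."""
    import boto3  # requires Bedrock model access to be enabled in your account
    client = boto3.client("bedrock-runtime")
    response = client.converse(**build_converse_request(prompt))
    return response["output"]["message"]["content"][0]["text"]
```

No model training or infrastructure management is involved: the request is a plain API call, which is the core of what Bedrock provides.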

4. Do I need to know machine learning to use AWS Bedrock?

No. Bedrock provides ready-to-use models that you can integrate through prompts and APIs. Basic AWS and programming experience is helpful, but deep ML expertise is not required.

5. What types of models are available on AWS Bedrock?

AWS Bedrock offers multiple families of foundation models, including:

  • Anthropic Claude (text reasoning, conversation, analysis)

  • Meta Llama (general-purpose and code generation)

  • Other models for text, image generation, embedding, and more.

6. What is prompt engineering, and why is it important?

Prompt engineering is the practice of structuring inputs to guide an LLM’s output. Good prompts improve accuracy, reduce hallucinations, and optimize performance, especially for enterprise use cases.
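One widely used structuring pattern is to give the model a role, the relevant context, explicit constraints, and a required output format, rather than a bare question. The template below is a generic illustration of that idea, not a Bedrock-specific API.

```python
# An illustrative structured-prompt template: role + context + task +
# constraints tends to produce more accurate, grounded answers than a
# bare question, and the "answer only from the context" constraint
# helps reduce hallucinations.

PROMPT_TEMPLATE = """You are {role}.

Context:
{context}

Task: {task}

Constraints:
- Answer only from the context above; say "I don't know" if the answer is not there.
- Respond in {output_format}.
"""

def build_prompt(role: str, context: str, task: str,
                 output_format: str = "plain text") -> str:
    """Fill the template with the caller's role, context, and task."""
    return PROMPT_TEMPLATE.format(role=role, context=context,
                                  task=task, output_format=output_format)
```

The same prompt string can then be sent to any foundation model; the structure, not the model, is what this technique controls.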

7. Can I build custom LLM applications on AWS?

Yes. You can integrate Bedrock with Lambda, API Gateway, S3, OpenSearch, and other AWS services to build chatbots, document analysis tools, automation systems, and RAG applications.
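A common integration shape is an AWS Lambda function fronted by API Gateway that forwards a prompt to Bedrock. The sketch below assumes an event with a JSON body containing a "prompt" field (as API Gateway would deliver); the model ID is an example, and the Bedrock client is created lazily so the request-handling logic can be exercised without AWS.

```python
import json

def lambda_handler(event, context, client=None):
    """Sketch of a Lambda handler that forwards a prompt to Bedrock.

    Assumes an API Gateway-style event with a JSON body {"prompt": "..."}.
    The client argument exists so a stub can be injected for local testing.
    """
    body = json.loads(event.get("body") or "{}")
    prompt = body.get("prompt", "").strip()
    if not prompt:
        return {"statusCode": 400,
                "body": json.dumps({"error": "prompt is required"})}

    if client is None:
        import boto3  # the Lambda role needs bedrock:InvokeModel permission
        client = boto3.client("bedrock-runtime")

    response = client.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    answer = response["output"]["message"]["content"][0]["text"]
    return {"statusCode": 200, "body": json.dumps({"answer": answer})}
```

Because the handler is stateless and serverless, it scales with traffic and you pay only per invocation plus Bedrock token usage.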

8. What is RAG (Retrieval-Augmented Generation)?

RAG combines LLMs with external data retrieval. It allows an LLM to reference your company’s documents or databases, enabling accurate and context-aware responses using up-to-date information.
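The flow can be sketched end to end in a few lines: retrieve the most relevant snippets, then prepend them to the prompt. In the course labs the retriever is Amazon OpenSearch with vector embeddings; the keyword-overlap scorer below is a deliberately simple stand-in so the sketch stays self-contained.

```python
# Toy RAG pipeline: rank documents by relevance to the query, then
# build a prompt that instructs the model to answer from that context.
# Word overlap stands in for the vector similarity search used in
# production (e.g. Amazon OpenSearch with embeddings).

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Combine retrieved context and the question into one grounded prompt."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (f"Answer using only this context:\n{context}\n\n"
            f"Question: {query}")
```

The resulting prompt is then sent to the LLM as usual, so the model's answer is grounded in your own documents rather than only its training data.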

9. How does AWS ensure security for generative AI applications?

AWS provides IAM-based access control, private networking options, encryption, audit logging, and secure API endpoints. You retain full control over your data and permissions when using Bedrock.
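As a concrete illustration of least-privilege access control, the policy below allows an application to invoke one specific Bedrock model and nothing else. The region and model ARN are placeholders chosen for this sketch; `bedrock:InvokeModel` and `bedrock:InvokeModelWithResponseStream` are the real IAM action names.

```python
import json

# Sketch of a least-privilege IAM policy for a Bedrock application.
# The region and model ARN below are illustrative placeholders; scope
# the Resource to exactly the model(s) your workload needs.

BEDROCK_INVOKE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSingleModelInvoke",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            # Foundation-model ARNs have an empty account field
            "Resource": ("arn:aws:bedrock:eu-west-2::foundation-model/"
                         "anthropic.claude-3-haiku-20240307-v1:0"),
        }
    ],
}

print(json.dumps(BEDROCK_INVOKE_POLICY, indent=2))
```

Attaching a policy like this to the application's execution role, rather than a broad `bedrock:*` grant, is the pattern covered in the hands-on labs.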

10. How do costs work when using generative AI on AWS?

Costs are typically based on model usage—measured in tokens processed—and the specific model chosen. Optimizing prompts, selecting efficient models, and using caching strategies can help reduce costs.
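The arithmetic behind token-based billing is simple enough to sketch. The per-1K-token prices below are made-up placeholders for illustration; always take current rates for your chosen model from the AWS Bedrock pricing page.

```python
# Back-of-envelope cost estimate for a token-billed model. The rates
# used in the example call are hypothetical placeholders, not real
# AWS prices.

def estimate_cost(input_tokens: int, output_tokens: int,
                  price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Return the estimated cost in USD for a single request."""
    return ((input_tokens / 1000) * price_in_per_1k
            + (output_tokens / 1000) * price_out_per_1k)

# e.g. 2,000 input tokens and 500 output tokens at hypothetical rates
# of $0.003 and $0.015 per 1K tokens:
cost = estimate_cost(2000, 500, 0.003, 0.015)  # roughly $0.0135
```

Running this kind of estimate per request, multiplied by expected traffic, is how the course approaches budgeting before choosing a model.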

11. Do I need to train my own model to use generative AI on AWS?

No. AWS Bedrock provides pre-trained foundation models. If needed, Amazon SageMaker can be used for fine-tuning or training custom models, but it's optional.

12. What programming languages can I use to integrate Bedrock?

You can use any AWS-supported SDK, including Python, JavaScript/TypeScript, Java, Go, and others, to call Bedrock APIs and build applications.

CONTACT
+44 (0)20 8446 7555

[email protected]


 

Copyright © 2025 JBI Training. All Rights Reserved.
JB International Training Ltd  -  Company Registration Number: 08458005
Registered Address: Wohl Enterprise Hub, 2B Redbourne Avenue, London, N3 2BS

Modern Slavery Statement & Corporate Policies | Terms & Conditions | Contact Us

POPULAR

AI training courses
CoPilot training course
Threat modelling training course
Python for data analysts training course
Power BI training course
Machine Learning training course
Spring Boot Microservices training course
Terraform training course
Data Storytelling training course
C++ training course
Power Automate training course
Clean Code training course