Fine-Tuning Foundation Models on Amazon Bedrock
5th November 2025
By fine-tuning a large foundation model on domain-specific data, we can obtain impressive improvements in accuracy and relevance for a target application. With Amazon Bedrock, organizations and individuals can customize powerful models, including Titan and Nova, at scale without building specialized infrastructure.
In this blog, we will walk through a comprehensive, effective method for fine-tuning a foundation model on Amazon Bedrock. You will learn how to configure IAM and S3 appropriately, prepare training data, create a fine-tuning job, and then access your custom model with on-demand or Provisioned Throughput inference.
Fig: S3 bucket for training data and output artifact
A service role that Bedrock can assume is required, with a trust policy for bedrock.amazonaws.com and permissions to access your S3 buckets. The permissions policy typically looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::${training-bucket-name}",
        "arn:aws:s3:::${training-bucket-name}/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::${output-bucket}",
        "arn:aws:s3:::${output-bucket}/*"
      ]
    }
  ]
}
Fig: Service role for bedrock job
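The trust policy itself, which allows Bedrock to assume this role, might look like the following sketch. The account ID (111122223333) and Region are placeholders, and the optional condition keys shown here scope the role to your account's model customization jobs; adjust or drop them as needed:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"Service": "bedrock.amazonaws.com"},
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {"aws:SourceAccount": "111122223333"},
        "ArnLike": {
          "aws:SourceArn": "arn:aws:bedrock:us-east-1:111122223333:model-customization-job/*"
        }
      }
    }
  ]
}
```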
Before touching data, clarify the format your chosen model expects. The standard pattern is JSON Lines (.jsonl), but the exact schema depends on the model type. For example, Nova models require a specific conversation format:
{
  "schemaVersion": "bedrock-conversation-2024",
  "system": [{"text": "You are a fitness and health analysis assistant."}],
  "messages": [
    {"role": "user", "content": [{"text": "Analyze: Male 30y, 70kg, 1.75m, BMI 22.86..."}]},
    {"role": "assistant", "content": [{"text": "Healthy BMI; moderate intensity..."}]}
  ]
}
Fig: Training data set
Upload your train.jsonl and optional validation.jsonl to your S3 bucket.
Ensure the bucket uses appropriate encryption and your Bedrock role has the necessary access.
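The preparation and upload steps can be sketched in Python as follows. This is a minimal example, not the only way to do it; the bucket name, object key, and sample text are placeholders you would replace with your own:

```python
import json

def make_record(system_prompt, user_text, assistant_text):
    """Build one training example in the bedrock-conversation-2024 schema."""
    return {
        "schemaVersion": "bedrock-conversation-2024",
        "system": [{"text": system_prompt}],
        "messages": [
            {"role": "user", "content": [{"text": user_text}]},
            {"role": "assistant", "content": [{"text": assistant_text}]},
        ],
    }

def write_jsonl(records, path):
    """Write records as one JSON object per line (.jsonl)."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")

def upload(path, bucket, key):
    """Upload the file to S3 (requires AWS credentials)."""
    import boto3  # imported here so the rest of the script runs without it
    boto3.client("s3").upload_file(path, bucket, key)

records = [
    make_record(
        "You are a fitness and health analysis assistant.",
        "Analyze: Male 30y, 70kg, 1.75m, BMI 22.86...",
        "Healthy BMI; moderate intensity...",
    )
]
write_jsonl(records, "train.jsonl")
# upload("train.jsonl", "training-bucket-name", "data/train.jsonl")  # placeholder bucket
```

Validating that every line parses as JSON before uploading saves a failed job later, since a single malformed line can cause the fine-tuning job to error out.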
Navigate to Tune → Custom models in the Bedrock console and choose Create Fine-tuning job.
Fig: Bedrock fine-tune job
Configure your hyperparameters such as epochCount, batchSize, and
learningRate.
Fig: Hyperparameters for fine-tuning
Specify the S3 locations for your input data and output artifacts.
Fig: Input and output data location
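The same job can be created programmatically with the boto3 `create_model_customization_job` API. The sketch below uses placeholder names, ARNs, and S3 URIs throughout; note that hyperparameters are passed as strings, and the valid keys and ranges vary by base model, so check the documentation for the model you are tuning:

```python
def build_job_request():
    """Assemble the request for create_model_customization_job.
    Every name, ARN, and URI here is a placeholder."""
    return {
        "jobName": "my-finetune-job",
        "customModelName": "my-custom-model",
        "roleArn": "arn:aws:iam::111122223333:role/BedrockFinetuneRole",
        "baseModelIdentifier": "amazon.nova-micro-v1:0",  # example base model
        "customizationType": "FINE_TUNING",
        "trainingDataConfig": {"s3Uri": "s3://training-bucket-name/data/train.jsonl"},
        "outputDataConfig": {"s3Uri": "s3://output-bucket/artifacts/"},
        "hyperParameters": {  # keys and ranges depend on the base model
            "epochCount": "2",
            "batchSize": "1",
            "learningRate": "0.00001",
        },
    }

def start_job():
    """Submit the job (requires AWS credentials and Bedrock access)."""
    import boto3  # imported here so building the request works without it
    bedrock = boto3.client("bedrock")
    return bedrock.create_model_customization_job(**build_job_request())

request = build_job_request()
# job = start_job()  # returns a jobArn; poll status with get_model_customization_job
```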
When the job finishes successfully, your custom model appears under Custom models and is ready for inference. You can use On-Demand inference (recommended for testing) or Provisioned Throughput.
Fig: On-Demand inference
Open the Playground and select your custom model to verify the improvements.
Fig: Testing the custom-trained model
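You can also exercise the model outside the Playground with the bedrock-runtime Converse API. In this sketch the model ARN is a placeholder: for On-Demand inference you pass your custom model's ARN as the modelId, and for Provisioned Throughput you pass the provisioned model ARN instead:

```python
# Placeholder ARN; replace with your custom model or provisioned model ARN.
MODEL_ARN = "arn:aws:bedrock:us-east-1:111122223333:custom-model/example"

def build_request(user_text):
    """Assemble the Converse API request for the custom model."""
    return {
        "modelId": MODEL_ARN,
        "system": [{"text": "You are a fitness and health analysis assistant."}],
        "messages": [{"role": "user", "content": [{"text": user_text}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }

def ask(user_text):
    """Send the request (requires AWS credentials and model access)."""
    import boto3  # imported here so building the request works without it
    runtime = boto3.client("bedrock-runtime")
    response = runtime.converse(**build_request(user_text))
    return response["output"]["message"]["content"][0]["text"]

request = build_request("Analyze: Female 28y, 60kg, 1.65m...")
# print(ask("Analyze: Female 28y, 60kg, 1.65m..."))
```

Sending the same prompts to the base model and the custom model side by side is a quick, informal way to see what the fine-tuning changed before running formal evaluations.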
Fine-tuning is iterative. Use validation loss and task-specific metrics to judge performance. You can also run model evaluation jobs in Bedrock to systematically compare your custom model against the base model.
Fine-tuning on Amazon Bedrock gives teams a potent way to unlock domain-specific AI without building their own infrastructure. By following a step-by-step workflow of defining the use case, preparing data as JSONL, configuring IAM and S3 appropriately, and tuning hyperparameters, you can safely and reliably develop custom models that behave the way you require.