In this post, we’ll walk through building an AI-powered web application with FastAPI and then quickly dockerizing it using Docker’s docker init feature. This streamlined approach will help you get your app up and running in a containerized environment with minimal hassle.
What is Docker init?
Docker init is a command-line tool that simplifies the Docker setup process by generating Docker configuration files based on your application’s requirements. It asks a series of questions to auto-detect your application platform and setup needs, helping you create Dockerfiles and Docker Compose configurations quickly and accurately.
Why Use Docker init?
- Simplified Setup: Docker init automates the creation of Dockerfiles and Docker Compose configurations, reducing manual configuration errors.
- Best Practices: It incorporates best practices for your chosen application platform and environment settings, ensuring optimized Docker images.
- Time-Saving: Ideal for developers who want a fast and reliable way to containerize their applications without delving into complex Docker configurations.
Now, let’s dive into how to use Docker init to containerize a FastAPI application with an AI model.
Step-by-Step Guide
Step 1: Create a Project Directory
Start by creating a new directory for your project:
mkdir fastapi-ml-docker
cd fastapi-ml-docker
Step 2: Set Up a Virtual Environment
Setting up a virtual environment helps you manage dependencies separately from other projects. Run the following commands:
python -m venv venv
source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
Step 3: Install Necessary Packages
Install FastAPI, Uvicorn, and the libraries needed for your AI model:
pip install fastapi uvicorn transformers torch pillow
Step 4: Create Application Files
Create your FastAPI application and model files following the given directory structure.
Directory Structure
fastapi-ml-docker/
├── app/
│   ├── __init__.py
│   ├── main.py
│   └── model.py
└── requirements.txt
app/__init__.py
This file can be left empty. It’s used to mark the directory as a Python package.
app/main.py
This file will handle the API requests and responses:
# app/main.py
from fastapi import FastAPI, UploadFile, File
from fastapi.responses import JSONResponse
from PIL import Image
import io

from app.model import get_prediction

app = FastAPI()

@app.post("/predict")
async def predict(image: UploadFile = File(...), question: str = "How many cats are there?"):
    try:
        image_data = await image.read()
        pil_image = Image.open(io.BytesIO(image_data))
        predicted_answer = get_prediction(pil_image, question)
        return JSONResponse(content={"predicted_answer": predicted_answer})
    except Exception as e:
        return JSONResponse(content={"error": str(e)}, status_code=500)

@app.get("/")
async def root():
    return {"message": "Welcome to the FastAPI ML Docker app!"}
app/model.py
This file contains the code to load your model and make predictions:
# app/model.py
from transformers import ViltProcessor, ViltForQuestionAnswering
import torch

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

def get_prediction(image, question):
    # Prepare inputs
    encoding = processor(image, question, return_tensors="pt")
    # Forward pass
    outputs = model(**encoding)
    logits = outputs.logits
    idx = logits.argmax(-1).item()
    predicted_answer = model.config.id2label[idx]
    return predicted_answer
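The last two lines of get_prediction are the whole decoding step: take the index of the highest-scoring logit and look it up in the model’s id2label mapping. Here’s a minimal, dependency-free sketch of that logic; the logits and labels are made-up stand-ins, since the real ViLT VQA model scores thousands of candidate answers:

```python
# Stand-alone sketch of the answer-decoding step in get_prediction.
# The logits and id2label values below are fake placeholders; the real
# model produces one score per candidate answer in its vocabulary.
def decode_answer(logits, id2label):
    # argmax over a plain list, mirroring logits.argmax(-1).item()
    idx = max(range(len(logits)), key=lambda i: logits[i])
    return id2label[idx]

logits = [0.1, 2.7, 0.4]                # fake scores for three answers
id2label = {0: "no", 1: "2", 2: "yes"}  # fake index-to-label mapping
print(decode_answer(logits, id2label))  # prints "2"
```

The model’s highest score wins, so the quality of the answer depends entirely on the logits the model produces for the image-question pair.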
Step 5: Create Requirements File
List your dependencies in a requirements.txt file:
fastapi
uvicorn
transformers
torch
pillow
Step 6: Using Docker Init
1. Initialize Docker in Your Project Directory
Run the following command in your project directory:
docker init
2. Answer the Prompts
Docker will ask a series of questions about your application. Here are the suggested answers for a FastAPI app:
- Application Platform: Python
- Python Version: 3.10
- Port: 8000
- Command to Run Your App: uvicorn app.main:app --host 0.0.0.0 --port 8000
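The interactive session looks roughly like this (the exact wording of the prompts varies by Docker version):

```text
? What application platform does your project use? Python
? What version of Python do you want to use? 3.10
? What port do you want your app to listen on? 8000
? What is the command you use to run your app? uvicorn app.main:app --host 0.0.0.0 --port 8000
```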
3. Review the Generated Files
Docker will create several files, including .dockerignore, Dockerfile, compose.yaml, and README.Docker.md. Here’s what they generally look like:
Dockerfile
# syntax=docker/dockerfile:1

# Comments are provided throughout this file to help you get started.
# If you need more help, visit the Dockerfile reference guide at
# https://docs.docker.com/go/dockerfile-reference/

# Want to help us make this template better? Share your feedback here: https://forms.gle/ybq9Krt8jtBL3iCk7

ARG PYTHON_VERSION=3.10.9
FROM python:${PYTHON_VERSION}-slim as base

# Prevents Python from writing pyc files.
ENV PYTHONDONTWRITEBYTECODE=1

# Keeps Python from buffering stdout and stderr to avoid situations where
# the application crashes without emitting any logs due to buffering.
ENV PYTHONUNBUFFERED=1

WORKDIR /app

# Create a non-privileged user that the app will run under.
# See https://docs.docker.com/go/dockerfile-user-best-practices/
# ARG UID=10001
# RUN adduser \
#     --disabled-password \
#     --gecos "" \
#     --home "/nonexistent" \
#     --shell "/sbin/nologin" \
#     --no-create-home \
#     --uid "${UID}" \
#     appuser

# Download dependencies as a separate step to take advantage of Docker's caching.
# Leverage a cache mount to /root/.cache/pip to speed up subsequent builds.
# Leverage a bind mount to requirements.txt to avoid having to copy them
# into this layer.
RUN --mount=type=cache,target=/root/.cache/pip \
    --mount=type=bind,source=requirements.txt,target=requirements.txt \
    python -m pip install -r requirements.txt

# Switch to the non-privileged user to run the application.
# USER appuser

# Copy the source code into the container.
COPY . .

# Expose the port that the application listens on.
EXPOSE 8000

# Run the application.
CMD uvicorn app.main:app --host 0.0.0.0 --port 8000
compose.yaml
# Comments are provided throughout this file to help you get started.
# If you need more help, visit the Docker Compose reference guide at
# https://docs.docker.com/go/compose-spec-reference/

# Here the instructions define your application as a service called "server".
# This service is built from the Dockerfile in the current directory.
# You can add other services your application may depend on here, such as a
# database or a cache.
services:
  server:
    build:
      context: .
    ports:
      - 8000:8000
.dockerignore
# Include any files or directories that you don't want to be copied to your
# container here (e.g., local build artifacts, temporary files, etc.).
#
# For more help, visit the .dockerignore file reference guide at
# https://docs.docker.com/go/build-context-dockerignore/

**/.DS_Store
**/__pycache__
**/.venv
**/.classpath
**/.dockerignore
**/.env
**/.git
**/.gitignore
**/.project
**/.settings
**/.toolstarget
**/.vs
**/.vscode
**/*.*proj.user
**/*.dbmdl
**/*.jfm
**/bin
**/charts
**/docker-compose*
**/compose*
**/Dockerfile*
**/node_modules
**/npm-debug.log
**/obj
**/secrets.dev.yaml
**/values.dev.yaml
LICENSE
README.md
Step 7: Build and Run Your Docker Container
After initializing Docker, build and run your container with the following commands:
docker compose up --build
Your application will be available at http://localhost:8000.
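To try the /predict endpoint, send an image as multipart/form-data, for example with curl: curl -X POST -F "image=@cat.jpg" "http://localhost:8000/predict?question=How+many+cats+are+there%3F" (cat.jpg is a placeholder path). If you’d rather stay in Python without extra dependencies, here’s a sketch that builds the multipart body by hand with the standard library; the URL and image bytes are placeholders for illustration:

```python
# Stdlib-only sketch of a client for the /predict endpoint. The server
# URL and the fake image bytes are placeholders; swap in a real JPEG.
import io
import urllib.request
import uuid

def build_multipart(field_name, filename, data):
    """Build a minimal multipart/form-data body for one file field."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(
        (
            f'Content-Disposition: form-data; name="{field_name}"; '
            f'filename="{filename}"\r\n'
            "Content-Type: application/octet-stream\r\n\r\n"
        ).encode()
    )
    body.write(data)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    return body.getvalue(), boundary

payload, boundary = build_multipart("image", "cat.jpg", b"<jpeg bytes here>")
req = urllib.request.Request(
    "http://localhost:8000/predict?question=How+many+cats+are+there%3F",
    data=payload,
    headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    method="POST",
)
# With the container running, uncomment to send the request:
# print(urllib.request.urlopen(req).read())
```

In practice you’d more likely reach for curl or the requests library; the point here is just to show the shape of the request the UploadFile parameter expects.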
Conclusion
You’ve successfully dockerized your FastAPI application with an AI model using Docker’s docker init, making the process quick and easy. This setup ensures your application is portable and can be deployed consistently across various environments. Check out the video tutorial and the GitHub repository for the complete code and further instructions. Happy coding!
Resources
Feel free to practice by yourself and contribute to the repository if you have any improvements or additions. Enjoy your journey in building AI-powered applications!