Load Balancing in NGINX
Whether you are migrating from hardware to software load balancers, moving to the cloud, or building the next killer app, load balancing across multiple application instances is key to optimizing resource utilization, maximizing throughput, reducing latency, and ensuring fault tolerance. NGINX is an efficient load balancer in a wide range of deployment scenarios. In this course, you'll start with a review of the available load balancing methods. The course also explains how to implement session persistence in NGINX Plus with sticky cookie, sticky learn, and sticky route, and provides examples of load balancing different upstream services, including Tomcat.
The course is self-paced and is made up of slides with text, screenshots of example configurations, audio narration, and two video demonstrations. You may proceed through this course one slide at a time or you may skip around. You may also refer to it multiple times and take it as often as you like.
In this course, you will learn:
Load balancing methods, including the Least Connections and IP Hash methods
How to handle server timeouts
Session persistence methods: sticky cookie, sticky route, and sticky learn
How to use dynamic configuration to view, modify, and remove servers in an upstream group at runtime
How NGINX Plus makes use of shared memory zones
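To give a sense of what these topics look like in practice, here is a minimal configuration sketch combining several of them; the upstream name, server hostnames, and cookie name are placeholders, not values from the course.

```nginx
# Hypothetical upstream group; hostnames and names are illustrative.
upstream backend {
    # Shared memory zone: required for runtime (API) changes to the
    # group and for the "sticky learn" persistence method.
    zone backend 64k;

    # Least Connections method: route each request to the server
    # with the fewest active connections.
    least_conn;

    server app1.example.com:8080;
    server app2.example.com:8080;

    # NGINX Plus session persistence: insert a cookie so subsequent
    # requests from the same client reach the same server.
    sticky cookie srv_id expires=1h path=/;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
    }

    # NGINX Plus API for viewing and modifying the upstream group
    # at runtime (restrict access to this location in production).
    location /api {
        api write=on;
    }
}
```

With a configuration along these lines, the servers in the group can be inspected and changed at runtime through the NGINX Plus API without reloading the configuration; the course's dynamic configuration material covers this in detail.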
This course is intended for those who are familiar with NGINX and are particularly interested in learning about its load balancing methods.
The course takes approximately 30 minutes to complete.