Using WebSockets and Load Balancers, Part Two

This post was written by Robert Zhu, Principal Developer Advocate at AWS.

This article continues a blog I posted earlier about using Load Balancers on Amazon Lightsail. In this article, I demonstrate a few common challenges and solutions when combining stateful applications with load balancers. I start with a simple WebSocket application in Amazon Lightsail that counts the number of seconds the client has been connected. Then, I add a Lightsail Load Balancer and show you how the application performs routing and retries. Let's get started.

WebSockets

WebSockets are persistent, duplex sockets that enable bi-directional communication between a client and server. Applications often use WebSockets to provide real-time functionality such as chat and gaming. Let's start with some sample code for a simple WebSocket server:

    const WebSocket = require("ws");
    const name = require("./randomName");
    const server = require("http").createServer();
    const express = require("express");
    const app = express();

    console.log(`This server is named: ${name}`);

    // serve files from the public directory
    server.on("request", app.use(express.static("public")));

    // tell the WebSocket server to use the same HTTP server
    const wss = new WebSocket.Server({
      server,
    });

    wss.on("connection", function connection(ws, req) {
      const clientId = req.url.replace("/?id=", "");
      console.log(`Client connected with ID: ${clientId}`);

      let n = 0;
      const interval = setInterval(() => {
        ws.send(`${name}: you have been connected for ${n++} seconds`);
      }, 1000);

      ws.on("close", () => {
        clearInterval(interval);
      });
    });

    const port = process.env.PORT || 80;
    server.listen(port, () => {
      console.log(`Server listening on port ${port}`);
    });

We serve static files from the public directory and WebSocket connection requests on the same port. An incoming HTTP request from a browser loads public/index.html, and a WebSocket connection initiated from the client triggers the wss.on("connection", …) code. Upon receiving a WebSocket connection, I set up a recurring callback where I tell the client how long it has been connected. Now, let's take a look at the client code:

    buttonConnect.onclick = async () => {
      const serverAddress = inputServerAddress.value;
      messages.innerHTML = "";
      instructions.parentElement.removeChild(instructions);

      appendMessage(`Connecting to ${serverAddress}`);

      try {
        let retries = 0;
        while (retries < 50) {
          appendMessage(`establishing connection... retry #${retries}`);
          await runSession(serverAddress);
          await sleep(1500);
          retries++;
        }

        appendMessage("Reached maximum retries, giving up.");
      } catch (e) {
        appendMessage(e.message || e);
      }
    };

    async function runSession(address) {
      const ws = new WebSocket(address);

      ws.addEventListener("open", () => {
        appendMessage("connected to server");
      });

      ws.addEventListener("message", ({ data }) => {
        console.log(data);
        appendMessage(data);
      });

      return new Promise((resolve) => {
        ws.addEventListener("close", () => {
          appendMessage("Connection lost with server.");
          resolve();
        });
      });
    }

I use the WebSocket DOM API to connect to the server. Once connected, I append any received messages to the console and on screen via the custom appendMessage function. If the client loses connectivity, it will try to reconnect up to 50 times. Let's run it:

"slimy-cardinal" is a randomly generated server name

Now, suppose I am running a very demanding real-time application and need to scale the server capacity beyond a single host. How would I do this? I create two Ubuntu 18.04 instances. Once the instances are up, I SSH to each one, and run the following commands:

    sudo apt-get update
    sudo apt-get -y install nodejs npm
    git clone https://github.com/robzhu/ws-time
    cd ws-time && npm install
    node server.js

During installation, select Yes when presented with the prompt:

Select "Yes" when prompted to install libssl for npm

Keep these SSH sessions open; you need them shortly. Next, create the Load Balancer in Amazon Lightsail and attach the instances:

screenshot of target instances for load balancers

Note: the Lightsail load balancer only works on port 80, which is part of the reason I use the same port for HTTP and WebSocket requests.

Copy the DNS name for the load balancer, open it in a new browser tab, and paste it into the WebSocket server address field with the format:

ws://<DNSName>

screenshot of what the correct server address should look like

Make sure the server address does not accidentally start with "ws://http://…"
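To guard against that copy-paste mistake, a small helper can normalize whatever the user pastes into a ws:// URL. This is a sketch of my own, not part of the original client; the function name toWebSocketUrl and the set of stripped prefixes are assumptions:

```javascript
// Normalize a pasted load balancer address into a ws:// URL,
// stripping any scheme that sneaks in from copy-paste.
function toWebSocketUrl(input) {
  const host = input.trim().replace(/^(wss|ws|https|http):\/\//, "");
  return `ws://${host}`;
}
```

In the client, you would run the pasted value through this helper before constructing the WebSocket, so "http://&lt;DNSName&gt;" and a bare "&lt;DNSName&gt;" both yield the same ws:// address.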

Next, locate the SSH session that accepted the connection. It looks like this:

screenshot of the SSH session that accepted the connection

The server logs the client ID when it receives a connection.

If you kill this process, the client disconnects and runs its retry logic, hopefully causing the load balancer to route the client to a healthy node. Next, hit connect from the client. After a few seconds, kill the process on the server, and you should see the client reconnect to a healthy instance:

what you should see when you connect to a healthy instance

The client retry was routed to a healthy instance on the first attempt. This is due to the round-robin algorithm that the Lightsail load balancer uses. In production, you should not expect the load balancer to detect an unhealthy node immediately. If the load balancer continues to route incoming connections to an unhealthy node, the client will need more retry attempts before reconnecting. In a large-scale system, we would want to implement an exponential backoff on the retry intervals to avoid overwhelming other nodes in the cluster (aka the thundering herd problem).
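A minimal sketch of such a backoff, reusing the runSession helper from the client code above. The delay constants and the full-jitter strategy are illustrative choices, not from the original post:

```javascript
// Exponential backoff with full jitter: the cap, base delay, and retry
// count below are illustrative values, not recommendations.
function backoffDelay(retry, baseDelayMs = 500, maxDelayMs = 30000) {
  const exp = Math.min(maxDelayMs, baseDelayMs * 2 ** retry);
  // Pick a random delay in [0, exp) so reconnecting clients spread out
  // instead of stampeding the surviving nodes in lockstep.
  return Math.random() * exp;
}

async function connectWithBackoff(address, maxRetries = 10) {
  for (let retry = 0; retry < maxRetries; retry++) {
    // runSession is the client helper shown earlier; it resolves when
    // the WebSocket connection closes.
    await runSession(address);
    const delay = backoffDelay(retry);
    console.log(`reconnecting in ${Math.round(delay)}ms (retry #${retry})`);
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
}
```

Because each client sleeps a different random fraction of the exponentially growing window, a mass disconnect does not translate into a synchronized reconnect wave.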

Notice that the message "you have been connected for X seconds" reset X to 0 after the client reconnected. What if you want to make the failover transparent to the user? The problem is that the connection duration (X) is stored in the NodeJS process that we killed. That state is lost if the process dies or if the host goes down. The solution is unsurprising: move the state off the WebSocket server and into a distributed cache, such as Redis.
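To make that refactor concrete, here is a sketch that hides the counter behind a small async store interface. A Map-backed stub stands in for the cache so the example is self-contained; in production you would back incr with Redis's atomic INCR command, and the key naming scheme here is my own:

```javascript
// Minimal async store interface; swap the Map for a Redis client
// (redis.incr(key)) to share state across nodes.
function createStore() {
  const counts = new Map();
  return {
    async incr(key) {
      const next = (counts.get(key) || 0) + 1;
      counts.set(key, next);
      return next;
    },
  };
}

const store = createStore();

// In the wss.on("connection") handler, replace the local `let n = 0`
// counter with a tick against the shared store, keyed by client ID:
async function tick(clientId) {
  return store.incr(`connected-seconds:${clientId}`);
}
```

Since the count is keyed by the client ID rather than held in process memory, a client that reconnects to a different node resumes its count instead of starting over.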

Deep health checks

When you attached your instances to the load balancer, your health checks passed because the Lightsail load balancer issues an HTTP request for the default path (where you serve index.html). However, if you expect most of your server load to come from I/O on the WebSocket connections, the ability to serve our index.html file is not a good health check. You might implement a better health check like so:

    app.get("/healthcheck", (req, res) => {
      const serverHasCapacity = getAverageNetworkUsage() < 0.6;
      if (serverHasCapacity) {
        res.status(200).send("ok");
      } else {
        res.status(400).send("server is overloaded");
      }
    });

This causes the load balancer to consider a node "unhealthy" when the target node's network usage reaches a threshold value. In response, the load balancer stops routing new incoming connections to that node. However, note that the load balancer will not end existing connections to an over-subscribed node.

When working with persistent connections or sticky sessions, always leave some capacity buffer. For instance, do not mark the server as unhealthy only when it reaches 100% capacity. This is because existing connections or sticky clients will continue to generate traffic for that node, and some workloads may increase server usage beyond the threshold (e.g. a chat room that suddenly gets very busy).

Conclusion

I hope this post has given you a clear idea of how to use load balancers to improve scalability for stateful applications, and how to implement such a solution using Amazon Lightsail instances and load balancers. Please feel free to leave comments, and try this solution for yourself.


Source: https://aws.amazon.com/blogs/compute/using-websockets-and-load-balancers-part-two/
