Hey Devs,
In today's tutorial, let us configure a TURN server. The acronym stands for Traversal Using Relays around NAT, and it is a protocol for relaying network traffic.
There are currently several options for TURN servers available online, both as self-hosted applications (like the open-source COTURN project) and as cloud-provided services.
Once you have a TURN server available online, all you need is the correct RTCConfiguration for your client application to use it. The following code snippet illustrates a sample configuration for an RTCPeerConnection, where the TURN server uses the public IP of the EC2 instance and runs on port 3478. The configuration object also supports the username and credential properties for securing access to the server; these are required when connecting to a TURN server.
const iceConfiguration = {
  iceServers: [
    {
      urls: 'turn:<EC2_PUBLIC_IP>:3478',
      username: 'username',
      credential: 'password'
    }
  ]
};
const peerConnection = new RTCPeerConnection(iceConfiguration);
In this tutorial, we will go through configuring a TURN server using the open-source coturn project. To read more about coturn, see https://github.com/coturn/coturn
Coturn server configuration
Launch an Ubuntu EC2 instance in your AWS account (a t2.micro is fine for this tutorial, but choose a larger instance for production) and SSH into it. Once you have successfully SSHed into the instance, you will need to make a few changes to get everything working.
STEPS
- Log in to the AWS Console and search for EC2.
Search for an Ubuntu instance, select a t2.micro, and continue with the default settings.
Ensure you create a private key and download it. Convert the .pem file to .ppk with PuTTYgen so that you can use it with PuTTY.
To read more about launching EC2 instances: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html
- SSH into the EC2 instance, update it, and install the coturn package.
# update the package index
sudo apt update
# install coturn
sudo apt-get install coturn
- With the coturn package installed, ensure it always starts on system reboot. To achieve this, run the following commands.
# enable coturn on boot, start it, and check its status
sudo systemctl enable coturn
sudo systemctl start coturn
sudo systemctl status coturn
Alternatively, edit the following file.
# edit the coturn defaults file
sudo vim /etc/default/coturn
# uncomment the following line and save
TURNSERVER_ENABLED=1
- To configure coturn, we need to edit the file /etc/turnserver.conf. Before editing, create a backup so that you can restore a clean copy if you need to start over.
sudo cp /etc/turnserver.conf /etc/turnserver.conf.backup
- Uncomment and edit the following lines in the file.
# turnserver listening ports
listening-port=3478
tls-listening-port=5349
# on EC2 the public IP is not bound to the instance, so listen on the private IP
listening-ip=<PRIVATE_IP>
# advertise the public IP of your EC2 as the relay address
external-ip=<PUBLIC_IP>
# relay port range
min-port=49152
max-port=65535
# enable verbose logging
verbose
# use fingerprints in TURN messages
fingerprint
# enable the long-term credential mechanism
lt-cred-mech
# server name
server-name=turnserver
# domain name
realm=odongo.com
# provide a username and password
user=<USERNAME>:<PASSWORD>
# log file path
log-file=/var/tmp/turn.log
For more details about the configuration options in turnserver.conf, see https://github.com/coturn/coturn/wiki/turnserver.
That is all we have to do to configure a coturn server on an Ubuntu EC2 instance in AWS. Before testing, make sure your EC2 security group allows inbound TCP and UDP traffic on port 3478 and inbound UDP on the relay range 49152-65535. To test your TURN server, go to this link: https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/.
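You can also test from the command line with turnutils_uclient, a test client that ships with the coturn package. The IP and credentials below are placeholders for the values you configured:

```
# run a basic allocation test against the TURN server over TCP
turnutils_uclient -v -t -u <USERNAME> -w <PASSWORD> -p 3478 <EC2_PUBLIC_IP>
```

If the server is reachable and the credentials are correct, the verbose output should show successful allocations rather than timeouts.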
Large production deployment
The above setup was a single Ubuntu EC2 TURN server that can serve a personal project. For large production workloads, we need to change the approach. There are two approaches we can consider.
- Deploying the TURN server on a single large EC2 instance.
This has some disadvantages: you will have to deploy a similar instance in a different AZ for disaster recovery, and scaling becomes an issue once the instance reaches its maximum threshold.
- Deploying a load balancer and an Auto Scaling group.
This is the ideal approach for large production workloads. We will need to configure a Classic Load Balancer and an Auto Scaling group.
STEPS
- Let us create a new Ubuntu EC2 instance and configure a TURN server on it using coturn. Select a larger instance type, depending on what your company allows, and configure it as above. Note that coturn's listening-ip and external-ip options expect IP addresses, so keep those as before; the difference is that clients will connect through the Classic Load Balancer's DNS name instead of the instance IP:
# clients reference the Classic Load Balancer DNS
turn:corturn-server-********.us-east-1.elb.amazonaws.com:3478
NOTE
Load balancers perform health checks to determine the health of the EC2 instances in an Auto Scaling group. Health checks ping the instances over a declared path, so we will install Nginx on our instances to respond to those pings. In your EC2 security group, ensure ports 22 and 80 are open.
# in the health check ping path, replace index.html with the following
index.nginx-debian.html
- SSH into the Ubuntu EC2 instance and run the following commands to install Nginx.
sudo apt-get install nginx
sudo systemctl enable nginx
sudo systemctl start nginx
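Once Nginx is running, you can verify that the health-check path responds by running the following on the instance itself (this assumes the default Debian/Ubuntu Nginx welcome page is in place):

```
# verify the page the load balancer will ping
curl -I http://localhost/index.nginx-debian.html
```

A 200 OK response here means the load balancer's health check against the same path should pass.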
With the EC2 instance ready, create an Auto Scaling group. Here are the steps.
- Create a snapshot of the EC2 instance you created above. This will allow replication of the instance for each deployment.
Once you have created a snapshot, create an AMI image from it.
Make sure the virtualization type is “Hardware-assisted Virtualization”.
Once you have an image of the TURN server, the next step is to create a launch template.
- Specify the AMI, select the t2.micro instance type, and create the launch template.
Now that you have a launch template, test it by launching an EC2 instance from the template and checking that it works. Specify 1 as the number of instances.
Once that succeeds, create an Auto Scaling group and attach the Classic Load Balancer.
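If you prefer the AWS CLI over the console, the snapshot and AMI steps above can be sketched roughly as follows. The volume ID, snapshot ID, and image name are placeholders, and the root device name may differ on your instance:

```
# create a snapshot of the instance's root volume
aws ec2 create-snapshot --volume-id <VOLUME_ID> --description "coturn server snapshot"

# register an AMI from the snapshot with hardware-assisted (HVM) virtualization
aws ec2 register-image --name coturn-server --virtualization-type hvm \
  --root-device-name /dev/sda1 \
  --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=<SNAPSHOT_ID>}"
```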
That is all we have to do. We now have a Classic Load Balancer with Ubuntu EC2 instances in an Auto Scaling group. In your application, this is how you will reference it.
const iceConfiguration = {
  iceServers: [
    {
      urls: 'turn:corturn-server-********.us-east-1.elb.amazonaws.com:3478',
      username: 'username',
      credential: 'password'
    }
  ]
};
const peerConnection = new RTCPeerConnection(iceConfiguration);
For Classic Load Balancer pricing, read more here: https://aws.amazon.com/elasticloadbalancing/pricing/
BONUS
As a bonus, you can deploy coturn in a container, push it to ECR, and deploy it to ECS.
- Pull the coturn image from Docker Hub.
# pull the coturn docker image
docker pull instrumentisto/coturn
- Run a container from the instrumentisto coturn image. The configuration options we uncommented and edited earlier are passed as command-line flags, as follows.
# run a coturn container
docker run -d --network=host instrumentisto/coturn -n --log-file=stdout --min-port=49160 --max-port=49200 --lt-cred-mech --fingerprint --no-multicast-peers --no-cli --no-tlsv1 --no-tlsv1_1 --realm=my.realm.org
- To open a shell inside the container, run the following command.
# open a shell in the running container
docker exec -it <CONTAINER_ID> sh
With that, you can push your image to ECR and finally host it on AWS ECS: https://dev.to/kevin_odongo35/manage-your-containers-on-aws-ecs-moj
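The push to ECR mentioned above can be sketched with the AWS CLI as follows. The account ID and region are placeholders, and the repository is assumed to already exist:

```
# authenticate docker against your ECR registry
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com

# tag the coturn image and push it to the ECR repository
docker tag instrumentisto/coturn <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/coturn:latest
docker push <ACCOUNT_ID>.dkr.ecr.us-east-1.amazonaws.com/coturn:latest
```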
I hope this tutorial will be helpful to someone who has been following my WebRTC tutorials.
Thank you