During installation, Docker creates three default networks.
You can list them with docker network ls:
bojana@linux:~$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
0c872e6d6453   bridge    bridge    local
10826dd62a8b   host      host      local
cab99af2344e   none      null      local
By default, bridge mode is selected, and containers reside on a private namespaced network within the host.
In the previous post, where we explored basic Docker commands, we used docker run -p to map a port from the host. This makes Docker create iptables rules that route traffic from the host to the container.
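Out of curiosity, you can peek at those rules on the host. The command below is just one way to do so, assuming your host uses the iptables backend (on nftables-only setups the output will look different):
bojana@linux:~$ sudo iptables -t nat -L DOCKER -n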
The none entry in the above list indicates that Docker should perform no network configuration whatsoever. It is intended for custom networking requirements.
If we want to explicitly select a container's network, we pass --net (or its longer form, --network) to docker run.
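For illustration only (using the demo image we will introduce below), this is how you would attach a container to the host network instead of the default bridge:
bojana@linux:~$ docker run -d --net host nginxdemos/hello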
A bridge is a Linux kernel feature that connects two network segments.
When you installed Docker, it quietly created a bridge called docker0 on the host.
You can verify that by issuing the command ip addr show. Here is the output on my machine:
bojana@linux:~$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 96:00:00:b1:5e:b3 brd ff:ff:ff:ff:ff:ff
inet 188.34.194.63/32 scope global dynamic eth0
valid_lft 74220sec preferred_lft 74220sec
inet6 2a01:4f8:1c1c:a675::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::9400:ff:feb1:5eb3/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:b6:9c:25:22 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:b6ff:fe9c:2522/64 scope link
valid_lft forever preferred_lft forever
5: veth51c3665@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 96:4d:57:fa:c0:99 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::944d:57ff:fefa:c099/64 scope link
valid_lft forever preferred_lft forever
docker0 is the virtual Ethernet bridge; it holds the address 172.17.0.1 on the 172.17.0.0/16 range.
veth51c3665 is the host side of the virtual interface pair that connects the container to the bridged network.
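If you want to confirm that this veth interface really hangs off docker0, one way to do it with the same iproute2 tooling is to list the interfaces whose master is the bridge:
bojana@linux:~$ ip link show master docker0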
As we said in the previous article, when you launch a container, no ports are published by default, so the container is not visible from outside the Docker host. We can, however, still reach it from the Docker host itself.
We will use the nginxdemos/hello image in this example, because it contains a simple web server that prints out, among other things, the IP address of the container as seen from the inside.
bojana@linux:~$ docker run -d nginxdemos/hello
Let’s connect to this running container and see what IP address it got assigned.
To access a shell of a running container we use:
bojana@linux:~$ docker exec -it jovial_wu /bin/ash
Note: jovial_wu is a random container name assigned at creation time because we didn't specify one; check yours with the docker ps command.
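If you prefer a predictable name, you can pass --name when starting the container; hello1 below is just an arbitrary example:
bojana@linux:~$ docker run -d --name hello1 nginxdemos/hello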
ip addr show on the container command line reveals:
/ # ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
We see that the address is from the above-mentioned docker0 range.
If we paste this address into a browser, we should get the nginx demo page.
It shows us what the container sees from the inside, such as its IP on the bridge plus the server name, which is the Linux hostname of the container (the container ID).
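If your Docker host is headless, you can do the same check with curl from the host itself (assuming curl is installed, and using the 172.17.0.3 address from above):
bojana@linux:~$ curl -s http://172.17.0.3 | head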
How can containers communicate with each other on the bridge?
Let’s create a second instance of the container:
bojana@linux:~$ docker run -d nginxdemos/hello
And connect to it:
bojana@linux:~$ docker exec -it inspiring_mcnulty2 /bin/ash
If we check the IP we see again that it’s from the bridge range:
/ # ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:11:00:04 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.4/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
We can also ping the previously created container:
/ # ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.204 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.189 ms
64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.181 ms
64 bytes from 172.17.0.3: seq=3 ttl=64 time=0.192 ms
^C
--- 172.17.0.3 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.181/0.191/0.204 ms
This means we do have IP connectivity between two containers on the same bridge.
If we issue an ip route command, we can see there is a default route via the Docker host’s IP
on the bridge, which acts as a gateway.
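For reference, on the default bridge the routing table inside the container typically looks something like this (the addresses match our example; yours may differ):
/ # ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 scope link  src 172.17.0.4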
We can also see that we have internet access from within the container and that name resolution works:
/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=111 time=18.209 ms
64 bytes from 8.8.8.8: seq=1 ttl=111 time=18.524 ms
64 bytes from 8.8.8.8: seq=2 ttl=111 time=18.116 ms
64 bytes from 8.8.8.8: seq=3 ttl=111 time=18.414 ms
^C
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 18.116/18.315/18.524 ms
/ # ping google.com
PING google.com (142.250.185.110): 56 data bytes
64 bytes from 142.250.185.110: seq=0 ttl=112 time=28.885 ms
64 bytes from 142.250.185.110: seq=1 ttl=112 time=28.785 ms
64 bytes from 142.250.185.110: seq=2 ttl=112 time=29.383 ms
64 bytes from 142.250.185.110: seq=3 ttl=112 time=28.670 ms
^C
--- google.com ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 28.670/28.930/29.383 ms
In the default bridge configuration, all containers can communicate with one another because they are all on the same virtual network. However, you can create additional network namespaces to isolate containers from one another.
We said that the most common way of making a Docker container visible from the outside is port mapping.
Let's create another container, this time specifying a port mapping:
bojana@linux:~$ docker run -d -p 81:80 nginxdemos/hello
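With the port published, the demo page should also be reachable on port 81 of the Docker host itself; a quick check from the host (assuming curl is available):
bojana@linux:~$ curl -s http://localhost:81 | head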
One thing we should point out, though, is that name resolution between containers does not work on this default bridge.
If we attach to the container created above and try to ping the first or the second container by its hostname, we won't be able to:
/ # hostname
0888f0c68423
/ # ping 167bcc074170
ping: bad address '167bcc074170'
User-defined bridges
In principle, hostname resolution between containers is possible, just not on the default bridge, which means we have to create our own.
Use the docker network create command to create a user-defined bridge network:
bojana@linux:~$ docker network create --driver=bridge --subnet=172.172.0.0/24 --ip-range=172.172.0.128/25 --gateway=172.172.0.1 my-br0
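To double-check the subnet, IP range, and gateway we just configured, we can inspect the new network:
bojana@linux:~$ docker network inspect my-br0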
You can go ahead and stop/remove all the containers we created so far so we have a clean slate.
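One way to do that in a single sweep, assuming there are no other containers on this host you want to keep:
bojana@linux:~$ docker stop $(docker ps -q)
bojana@linux:~$ docker rm $(docker ps -aq)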
Now, let’s create a new container that uses our newly created bridge:
bojana@linux:~$ docker run -d --name my-nginx --network my-br0 --hostname my-nginx nginxdemos/hello
And another instance with different name and hostname:
bojana@linux:~$ docker run -d --name my-nginx2 --network my-br0 --hostname my-nginx2 nginxdemos/hello
Now, if we use docker inspect to check the network settings of the newly created container:
bojana@linux:~$ docker inspect -f '{{ json .NetworkSettings.Networks }}' my-nginx
{"my-br0":{"IPAMConfig":null,"Links":null,"Aliases":["b91bde6aa527","my-nginx"],"NetworkID":"f9e797b2ab8826342ea8343b8bebe8e1459eea114aa0b015f56b01f90fe39ff8","EndpointID":"a1af70992c90967ebca917eeebaa67b4e7bc23bdfb2f352f283faaee223d4fc0","Gateway":"172.172.0.1","IPAddress":"172.172.0.128","IPPrefixLen":24,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:ac:00:80","DriverOpts":null}}
We see that our network settings were applied and the containers have been assigned to, or if you will joined, our defined network.
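To test hostname resolution, we open a shell in the first container, just as we did earlier with docker exec:
bojana@linux:~$ docker exec -it my-nginx /bin/ash
Now, if we try to ping the second container by its hostname, it works: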
/ # ping my-nginx2
PING my-nginx2 (172.172.0.129): 56 data bytes
64 bytes from 172.172.0.129: seq=0 ttl=64 time=0.166 ms
64 bytes from 172.172.0.129: seq=1 ttl=64 time=0.149 ms
64 bytes from 172.172.0.129: seq=2 ttl=64 time=0.211 ms
64 bytes from 172.172.0.129: seq=3 ttl=64 time=0.134 ms
^C
--- my-nginx2 ping statistics ---
And the same works if we ping the other container, my-nginx, by hostname:
/ # ping my-nginx
PING my-nginx (172.172.0.128): 56 data bytes
64 bytes from 172.172.0.128: seq=0 ttl=64 time=0.342 ms
64 bytes from 172.172.0.128: seq=1 ttl=64 time=0.134 ms
64 bytes from 172.172.0.128: seq=2 ttl=64 time=0.191 ms
64 bytes from 172.172.0.128: seq=3 ttl=64 time=0.128 ms
^C
--- my-nginx ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.128/0.198/0.342 ms
The conclusion is that Docker does provide name resolution between containers on a user-defined bridge.
Credits: A lot of content is borrowed from this excellent video by OneMarcFifty on YouTube. Go check out the channel; it has some really interesting content, presented in a nice and clean way. For this post, I tried to use just Docker commands, without resorting to Portainer, and added some additional notes.
Post originally published on bojana.dev