Computer Networks: Crash Course Computer Science #28

Hi, I'm Carrie Anne, and welcome to Crash Course Computer Science! The Internet is amazing. In just a few keystrokes we can stream videos on YouTube (hello!), read articles on Wikipedia, order supplies on Amazon, video chat with friends, and tweet about the weather. Without a doubt, the ability for computers and their users to send and receive information over a global telecommunications network forever changed the world. A hundred and fifty years ago, sending a letter from London to California would have taken two to three weeks, and that's if you paid for express mail. Today, that "email" takes a fraction of a second. This million-fold improvement in latency (that's the time it takes for a message to transfer) juiced up the global economy, helping the modern world to move at the speed of light, on fiber optic cables spanning the globe. You might think that computers and networks always went hand in hand, but actually most computers pre-1970 were humming away all alone. However, as big computers started popping up everywhere, and low-cost machines started to show up on people's desks, it became increasingly useful to share data and resources, and the first networks of computers appeared. Today, we're going to start a three-episode arc on how computer networks came into being, and the fundamental principles and techniques that power them.

[Music]

The first computer networks appeared in the 1950s and 60s. They were generally used within an organization, like a company or research lab, to facilitate the exchange of information between different people and computers. This was faster and more reliable than the previous method of having someone walk a pile of punch cards, or a reel of magnetic tape, to a computer on the other side of the building, a method later dubbed "sneakernet". A second benefit of networks was the ability to share physical resources. For example, instead of each computer having its own printer, everyone could share one attached to the network. It was also common on early networks to have large, shared storage drives, ones too expensive to have attached to every machine.
These relatively small networks of close-by computers are called Local Area Networks, or LANs. A LAN could be as small as two machines in the same room, or as large as a university campus with thousands of computers. Although many LAN technologies were developed and deployed, the most famous and successful was Ethernet, developed in the early 1970s at Xerox PARC, and still widely used today. In its simplest form, a series of computers are connected to a single, common Ethernet cable. When a computer wants to transmit data to another computer, it writes the data, as an electrical signal, onto the cable. Of course, because the cable is shared, every computer plugged into the network sees the transmission, but doesn't know whether the data is intended for them or another computer. To solve this problem, Ethernet requires that each computer has a unique Media Access Control address, or MAC address. This unique address is put into a header that prefixes any data sent over the network, so computers simply listen to the Ethernet cable, and only process data when they see their address in the header. This works really well; every computer made today comes with its own unique MAC address, for both Ethernet and Wi-Fi.

The general term for this approach is Carrier Sense Multiple Access, or CSMA for short. The "carrier" in this case is any shared transmission medium that carries data: copper wire in the case of Ethernet, and the air carrying radio waves for Wi-Fi. Many computers can simultaneously sense the carrier, hence the "sense" and "multiple access", and the rate at which the carrier can transmit data is called its bandwidth. Unfortunately, using a shared carrier has one big drawback: when network traffic is light, computers can simply wait for silence on the carrier, and then transmit their data. But as network traffic increases, the probability that two computers will attempt to write data at the same time also increases. This is called a collision.
The data gets all garbled up, like two people trying to talk on the phone at the same time. Fortunately, computers can detect these collisions by listening to the signal on the wire. The most obvious solution is for computers to stop transmitting, wait for silence, and then try again. Problem is, the other computer is going to try that too, and other computers on the network that have been waiting for the carrier to go silent will try to jump in during any pause. This just leads to more and more collisions. Soon, everyone is talking over one another, and has a backlog of things they need to say, like breaking up with a boyfriend over a family holiday dinner. Terrible idea!

Ethernet had a surprisingly simple and effective fix: when transmitting computers detect a collision, they wait for a brief period before attempting to retransmit. As an example, let's say one second. Of course, this doesn't work if all the computers use the same wait duration; they'd just collide again one second later. So, a random period is added: one computer might wait 1.3 seconds, while another waits 1.5 seconds. With any luck, the computer that waited 1.3 seconds will wake up, find the carrier to be silent, and start transmitting. When the 1.5-second computer wakes up a moment later, it'll see the carrier is in use, and will wait for the other computer to finish. This definitely helps, but doesn't totally solve the problem, so an extra trick is used. As I just explained, if a computer detects a collision while transmitting, it will wait one second, plus some random extra time. However, if it collides again, which suggests network congestion, instead of waiting another one second, this time it will wait two seconds. If it collides again, it'll wait four seconds, then eight, then sixteen, and so on, until it's successful. With computers backing off, the rate of collisions goes down, and data starts moving again, freeing up the network. Family dinner saved! This "backing off" behavior, using an exponentially growing wait time, is called Exponential Backoff.
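The wait-time schedule just described can be sketched in a few lines of Python. This is an illustrative toy, not a real Ethernet implementation; the function name and the one-second base interval are assumptions for the sake of the example.

```python
import random

def backoff_delay(collisions, base=1.0):
    """Wait time after the n-th consecutive collision: the base
    interval doubles each time (1s, 2s, 4s, 8s, ...), plus a random
    extra so two colliders are unlikely to wake at the same moment."""
    return base * (2 ** (collisions - 1)) + random.random()

# The deterministic part of the delay grows exponentially:
for n in range(1, 5):
    print(int(backoff_delay(n)))  # prints 1, 2, 4, 8
```

Real Ethernet backs off in multiples of a "slot time" measured in microseconds rather than whole seconds, but the doubling idea is the same.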
Both Ethernet and Wi-Fi use it, and so do many transmission protocols. But even with clever tricks like exponential backoff, you could never have an entire university's worth of computers on one shared Ethernet cable. To reduce collisions and improve efficiency, we need to shrink the number of devices on any given shared carrier, what's called the Collision Domain. Let's go back to our earlier Ethernet example, where we had six computers on one shared cable, aka one collision domain. To reduce the likelihood of collisions, we can break this network into two collision domains by using a Network Switch. It sits between our two smaller networks, and only passes data between them if necessary. It does this by keeping a list of what MAC addresses are on what side of the network. So if A wants to transmit to C, the switch doesn't forward the data to the other network; there's no need. This means that if E wants to transmit to F at the same time, the network is wide open, and two transmissions can happen at once. But if F wants to send data to A, then the switch passes it through, and the two networks are both briefly occupied. This is essentially how big computer networks are constructed, including the biggest one of all, the Internet, which literally interconnects a bunch of smaller networks, allowing inter-network communication.

What's interesting about these big networks is that there are often multiple paths to get data from one location to another, and this brings us to another fundamental networking topic: routing. The simplest way to connect two distant computers, or networks, is by allocating a communication line for their exclusive use. This is how early telephone systems worked. For example, there might be five telephone lines running between Indianapolis and Missoula. If John picked up the phone wanting to call Hank in the 1910s, John would tell a human operator where he wanted to call, and the operator would physically connect John's phone line to an unused line running to Missoula, for the length of that call.
That line was occupied for the whole call, and if all five lines were already in use, John would have to wait for one to become free. This approach is called Circuit Switching, because you're literally switching whole circuits to route traffic to the correct destination. It works fine, but it's relatively inflexible and expensive, because there's often unused capacity. On the upside, once you have a line to yourself, or you have the money to buy one for your private use, you can use it to its full capacity, without having to share. For this reason, the military, banks, and other high-importance operations still buy dedicated circuits to connect their data centers.

Another approach for getting data from one place to another is Message Switching, which is sort of like how the postal system works. Instead of a dedicated route from A to B, messages are passed through several stops. So if John writes a letter to Hank, it might go from Indianapolis to Chicago, then hop to Minneapolis, then Billings, and then finally make it to Missoula. Each stop knows where to send it next, because it keeps a table of where to pass letters, given a destination address. What's neat about message switching is that it can use different routes, making communication more reliable and fault-tolerant. Sticking with our mail example, if there's a blizzard in Minneapolis grinding things to a halt, the Chicago mail hub can decide to route the letter through Omaha instead. In our example, the cities are acting like network routers.

The number of hops a message takes along its route is called the hop count. Keeping track of the hop count is useful, because it can help identify routing problems. For example, let's say Chicago thinks the fastest route to Missoula is through Omaha, but Omaha thinks the fastest route is through Chicago. That's bad, because both cities are going to look at the destination address, Missoula, and end up passing the message back and forth between them, endlessly. Not only does this waste bandwidth, it's a routing error that needs to get fixed.
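The per-stop table lookup and hop counting can be sketched as a toy model in Python. The tables below are illustrative, matching the letter example on a day with no blizzard detours; real routers keep far richer tables and update them constantly.

```python
# Each stop keeps a table: given a final destination, where to send
# the message next. These example routes follow the letter's path.
next_hop = {
    "Indianapolis": {"Missoula": "Chicago"},
    "Chicago":      {"Missoula": "Minneapolis"},
    "Minneapolis":  {"Missoula": "Billings"},
    "Billings":     {"Missoula": "Missoula"},
}

def route(start, dest, hop_limit=10):
    """Follow the next-hop tables, counting hops; give up at the
    hop limit, which is how an endless routing loop gets caught."""
    path, here, hops = [start], start, 0
    while here != dest:
        if hops >= hop_limit:
            raise RuntimeError("hop limit exceeded: likely a routing loop")
        here = next_hop[here][dest]   # table lookup at each stop
        path.append(here)
        hops += 1
    return path, hops

print(route("Indianapolis", "Missoula"))
# → (['Indianapolis', 'Chicago', 'Minneapolis', 'Billings', 'Missoula'], 4)
```

If Chicago and Omaha pointed at each other in these tables, the loop would spin until the hop limit tripped, rather than forever.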
This kind of error can be detected, because the hop count is stored with the message and updated along its journey. If you start seeing messages with high hop counts, you can bet something has gone awry in the routing. This threshold is called the Hop Limit.

A downside to message switching is that messages are sometimes big, so they can clog up the network, because the whole message has to be transmitted from one stop to the next before continuing on its way. While a big file is transferring, that whole link is tied up, even if you have a tiny one-kilobyte email trying to get through. It either has to wait for the big file transfer to finish, or take a less efficient route. That's bad!

The solution is to chop up big transmissions into many small pieces, called packets. Just like with message switching, each packet contains a destination address on the network, so routers know where to forward them. This format is defined by the Internet Protocol, or IP for short, a standard created in the 1970s. Every computer connected to a network gets an IP address. You've probably seen these as four 8-bit numbers written with dots in between; for example, 172.217.7.238 is an IP address for one of Google's servers. With millions of computers online, all exchanging data, bottlenecks can appear and disappear in milliseconds. Network routers are constantly trying to balance the load across whatever routes they know, to ensure speedy and reliable delivery, which is called congestion control. Sometimes, different packets from the same message take different routes through a network. This opens the possibility of packets arriving at their destination out of order, which is a problem for some applications. Fortunately, there are protocols that run on top of IP, like TCP/IP, that handle this issue; we'll talk more about that next week. Chopping up data into small packets, and passing these along flexible routes with spare capacity, is so efficient and fault-tolerant that it's what the whole internet runs on today.
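The chopping and reassembly can be sketched in Python. The dictionary "packet" format and the tiny three-character payload are illustrative stand-ins, not the real IP packet layout.

```python
def packetize(message, dest_ip, size=3):
    """Split a message into small packets, each tagged with the
    destination address and a sequence number."""
    return [{"dest": dest_ip, "seq": i // size, "data": message[i:i + size]}
            for i in range(0, len(message), size)]

packets = packetize("HELLO WORLD!", "172.217.7.238")
print(len(packets))  # 4 packets of up to 3 characters each

# Packets may arrive out of order; sequence numbers let the receiver
# put the message back together (a job handled above IP, e.g. by TCP).
scrambled = list(reversed(packets))
rebuilt = "".join(p["data"] for p in sorted(scrambled, key=lambda p: p["seq"]))
print(rebuilt)  # prints HELLO WORLD!
```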
This routing approach is called Packet Switching. It also has the nice property of being decentralized, with no central authority or single point of failure. In fact, the threat of nuclear attack is why packet switching was developed during the Cold War! Today, routers all over the globe work cooperatively to find efficient routes, exchanging information with each other using special protocols, like the Internet Control Message Protocol (ICMP) and the Border Gateway Protocol (BGP). The world's first packet-switched network, and the ancestor of the modern Internet, was the ARPANET, named after the U.S. agency that funded it: the Advanced Research Projects Agency. Here's what the entire ARPANET looked like in 1974. Each smaller circle is a location, like a university or research lab, that operated a router. They also plugged in one or more computers; you can see PDP-1s, IBM System/360s, and even an ATLAS in London, connected over a satellite link. Obviously, the internet has grown by leaps and bounds in the decades since. Today, instead of a few dozen computers online, it's estimated to be nearing 10 billion, and it continues to grow rapidly, especially with the advent of Wi-Fi connected refrigerators, thermostats, and other smart appliances, forming an "Internet of Things".

So that's part one: an overview of computer networks. Is it a series of tubes? Well, sort of. Next week, we'll tackle some higher-level transmission protocols, slowly working our way up to the World Wide Web. I'll see you then.

Crash Course Computer Science is produced in association with PBS Digital Studios. At their channel, you can check out a playlist of shows like Braincraft, Coma Niddy, and PBS Infinite Series. This episode was filmed at the Chad and Stacey Emigholz Studio in Indianapolis, Indiana, and it was made with the help of all these nice people, and our wonderful graphics team is Thought Cafe. Thanks for the random access memories; I'll see you next time!

[Music]
