Network Latency in Milliseconds Per Mile

Image: Fiber-optic cable (Photodisc/Getty Images)

Latency is the time that elapses between the moment you send data across a network and the moment it reaches its destination; because propagation delay grows with distance, it is often expressed in milliseconds per mile. In most cases latency adds only a fraction of a second to network responsiveness, but even that small amount of time can affect some real-time applications. To cut a network's latency, reduce the distance data must travel or smooth the path it takes.
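
If you want to see latency directly, you can time a round trip yourself. The short Python sketch below measures how long a TCP connection takes to open; the host and port are placeholders, and the result includes handshake overhead on top of pure network latency.

import socket
import time

# Placeholder target; substitute a server you actually want to test against.
host, port = "example.com", 443

start = time.perf_counter()
sock = socket.create_connection((host, port), timeout=5)  # completes a TCP handshake
elapsed_ms = (time.perf_counter() - start) * 1000
sock.close()

print(f"TCP connect to {host}:{port} took {elapsed_ms:.1f} ms")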

Sources of Latency

In a vacuum, signals travel at the speed of light, about 186,000 miles per second. In a fiber-optic cable, they slow to roughly 122,000 miles per second, which works out to a propagation delay of about 8.2 microseconds per mile, or 0.82 milliseconds per 100 miles. Latency increases further each time a packet must pass through a router or a switch, or when your network uses network address translation (NAT), which rewrites packet addresses between your private network and your router's public address.
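
To put those figures to work, here is a minimal Python sketch that converts cable distance into one-way propagation delay, assuming the roughly 122,000-miles-per-second speed quoted above and ignoring any delay added by routers, switches or NAT.

# Propagation delay in fiber, using the article's figure of about 122,000 miles per second.
FIBER_MILES_PER_SECOND = 122_000

def fiber_delay_ms(miles):
    """One-way propagation delay, in milliseconds, for a straight fiber run."""
    return miles / FIBER_MILES_PER_SECOND * 1000

for distance in (1, 100, 500, 3000):
    print(f"{distance:>5} miles: {fiber_delay_ms(distance):.3f} ms one way")
# 1 mile comes out near 0.008 ms (8.2 microseconds) and 100 miles near 0.82 ms,
# matching the per-mile figures above.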

Significance

Network latency matters more for small data packets than for big chunks of information. On a large bulk transfer, the added drag of a millisecond or two is all but imperceptible. For small packets that need to move quickly, that same delay can make a significant difference. In real-time voice or video communication, high latency becomes particularly noticeable, especially when it interrupts the flow of conversation.
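
A rough calculation shows why small, time-sensitive packets feel latency while bulk transfers do not. The sketch below compares a small voice-sized packet with a large file on a 100 Mbps link, with 1 ms versus 3 ms of latency; the link speed and payload sizes are illustrative assumptions, not figures from this article.

# Total transfer time = latency + time to push the bits onto the wire.
LINK_BITS_PER_SECOND = 100_000_000  # assumed 100 Mbps link

def transfer_ms(payload_bytes, latency_ms):
    serialization_ms = payload_bytes * 8 / LINK_BITS_PER_SECOND * 1000
    return latency_ms + serialization_ms

for label, size in (("160-byte voice packet", 160), ("50 MB file", 50_000_000)):
    fast = transfer_ms(size, 1)   # 1 ms of latency
    slow = transfer_ms(size, 3)   # 3 ms of latency
    print(f"{label}: {fast:.2f} ms vs {slow:.2f} ms "
          f"(+{(slow - fast) / fast * 100:.0f}% slower)")
# The extra 2 ms roughly triples the small packet's delivery time but barely
# registers against the file transfer.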

Distance

Reduce the miles a signal must cross and you reduce latency along with them. Because propagation delay scales linearly with distance, a single mile of cable introduces only 0.5 percent of the delay of a 200-mile stretch. If you're planning a new office location that will join a wide area network, minimize its distance from the nearest node or the hub of the network. When you can't choose geographic locations that favor network traffic, consider the alternative approaches described below.
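
When you are weighing candidate sites, the same per-mile figure makes the comparison concrete. The sketch below estimates round-trip propagation delay to the network hub for a few hypothetical distances; the site names and mileages are made up for illustration.

US_PER_MILE = 8.2  # one-way delay in microseconds per mile of fiber, from the figure above

sites = {"Site A": 40, "Site B": 200, "Site C": 650}  # miles to the hub (hypothetical)

for name, miles in sorted(sites.items(), key=lambda item: item[1]):
    round_trip_ms = miles * US_PER_MILE * 2 / 1000  # out and back, converted to ms
    print(f"{name}: {miles} miles from the hub, about {round_trip_ms:.2f} ms round trip")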

Other Alternatives

Network switches that use hardware-assisted forwarding cut latency considerably because the forwarding hardware steers packets toward the right address without waiting on software. Expect an added delay on the order of 25 microseconds through such a switch, far less than a conventional switch introduces. Heavy congestion on your network means that packets arrive at a router faster than they can leave, so they wait in a queue; cutting the congestion cuts latency. Adding hardware that allows parallel processing, in which the network handles several jobs at once, also helps.
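
The congestion point is easy to see with a toy queue model: when packets arrive faster than a router can forward them, the backlog and the waiting time grow every millisecond. The arrival and forwarding rates below are made-up assumptions for illustration only.

ARRIVAL_PER_MS = 120   # packets arriving each millisecond (assumed)
SERVICE_PER_MS = 100   # packets the router can forward each millisecond (assumed)

queue = 0
for ms in range(1, 6):
    queue += ARRIVAL_PER_MS - SERVICE_PER_MS   # backlog grows by the excess arrivals
    wait_ms = queue / SERVICE_PER_MS           # time for the newest packet to drain
    print(f"after {ms} ms: {queue} packets queued, newest packet waits ~{wait_ms:.2f} ms")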
