Let me clarify: We have a certain amount of latency when streaming games from both local and internet servers. In either case, how do we improve that latency, and what limits will we run into as the technology progresses?
Okay, I’ll apologize… For context, though: in general, it’s the internet, and it’s hard to take “expert” at face value (and even outside of an online context, “expert” is a title I’m often skeptical of… even when it’s assigned to me :) ). I’ve argued with plenty of people (more so on Reddit) who turned out to just be CS students… It’s just the price of being on the internet, I guess, ha
I’m still not sure I agree with your conclusions, but that’s mostly healthy skepticism… because your argument isn’t tracking with… well… physics, or distributed computing. Adding more direct “routes”, and taking load off “routes” that aren’t optimal, is typically a great way to speed up a system. It’s definitely true that doing that adds overhead versus just having a few “better” systems do the work (at least from some perspectives), but it’s hard for me to imagine that, with sufficient funds, it truly makes things worse to give routing algorithms more direct options and/or cut out unnecessary hops entirely.
Reducing “hops” and travel time is kind of the bread and butter of performance work when it comes to all kinds of optimizations in software engineering…
If you want me to ask a question… what’s your explanation for why there are so many more connections in the Northeast and on the West Coast, if more connections slow the whole system down? Why not just have a handful of routes?
You can’t really compare small-scale clusters of highly available services with the scale of the entire Internet; it’s just an entirely different ballgame. Though even in small-scale setups, there is always a sweet spot between too many paths and not enough. VRRP (the protocol usually used for first-hop high availability) actually has quite a big overhead: you can’t run too many instances on the same network or it causes lots of problems.
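To make “overhead” concrete, here’s a toy back-of-the-envelope in Python. The interval is VRRPv2’s default advertisement timer, and the frame size is my own rough assumption, so treat the numbers as illustrative:

```python
# Rough sketch of VRRP control-plane chatter. Assumptions (mine, not
# from the thread): VRRPv2's default 1-second advertisement interval
# and an assumed ~60-byte advertisement frame.

ADVERT_INTERVAL_S = 1.0   # VRRPv2 default advertisement interval
ADVERT_BYTES = 60         # assumed size of one advertisement frame

def vrrp_control_traffic(num_instances: int) -> float:
    """Bytes/sec of multicast adverts on the segment, one master per instance."""
    return num_instances * ADVERT_BYTES / ADVERT_INTERVAL_S

for n in (2, 10, 100, 255):
    print(f"{n:>3} VRRP instances -> {vrrp_control_traffic(n):7.0f} B/s of adverts "
          f"that every router on the segment must receive and process")
```

The raw bandwidth is trivial; the pain is that every advertisement interrupts every router on that segment, and if you tune the timers down for faster failover, all of it multiplies.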
Internet-scale routing usually uses BGP, which also has quite a heavy overhead.
I guess all you need to understand is that routing isn’t free, and the more routes, the more overhead. So there’s always going to be a point where adding more routes just makes things slower rather than faster. And BGP is honestly a bit of a mess right now. The BGP table has grown so big that a lot of older devices can’t keep it in fast memory anymore, so they either have to be replaced with newer hardware or fall back to slow memory (and therefore slow packet processing). So it’s not really in everyone’s best interests to just keep adding more routes; it gets harder and harder to justify.
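Rough numbers, to give a feel for it. The per-route byte cost here is an assumption on my part; the ~512k route limit is the one behind the well-known “512k day” outages in 2014:

```python
# Back-of-the-envelope on BGP table growth. Assumed figures (illustrative):
# ~1M IPv4 prefixes in the global table, a fixed per-route cost in fast
# forwarding memory, and an older platform whose TCAM fits ~512k routes.

IPV4_PREFIXES = 1_000_000     # rough size of the global IPv4 table
BYTES_PER_ROUTE = 64          # assumed per-route cost in fast memory
TCAM_CAPACITY = 512_000       # routes that fit on the old hardware

needed_mb = IPV4_PREFIXES * BYTES_PER_ROUTE / 1024**2
overflow = IPV4_PREFIXES - TCAM_CAPACITY
print(f"~{needed_mb:.0f} MB of fast memory just for IPv4 routes")
print(f"{overflow:,} routes overflow into slow memory on the old box")
```

Whatever the exact per-route cost is on a given platform, the table only ever grows, so every extra route pushes more hardware over that cliff.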
I’m not from the US, so at best it would be an educated guess.
Firstly, it’s not as simple as “more connections means more slowdown”; it means there’s more overhead. If the improvement from adding another line outweighs the overhead, then it can be worthwhile. For example, imagine a simple network with three routers, A, B, and C, where A is connected only to B and C is connected only to B (so B is connected to both). If there is a large amount of traffic between A and C, it may be worth adding a direct connection between them. If there isn’t, then it’s probably not worth doing.
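If it helps, here’s a toy model of that trade-off in Python. Every constant is invented; it’s just to show the shape of the maths:

```python
# Toy model of the A-B-C example; all numbers are invented for illustration.

LINK_LATENCY_MS = 10.0    # propagation + serialization per link
PER_HOP_COST_MS = 2.0     # per-router processing (lookup, queueing)
PER_ROUTE_COST_MS = 0.01  # assumed extra lookup cost per route in the
                          # table, paid by ALL traffic through a router

def path_latency(links: int, routes_in_table: int) -> float:
    per_hop = PER_HOP_COST_MS + routes_in_table * PER_ROUTE_COST_MS
    return links * LINK_LATENCY_MS + links * per_hop

# Before: A -> B -> C (two links), smaller routing tables everywhere.
via_b = path_latency(links=2, routes_in_table=10)
# After: direct A -> C (one link), but everyone's table grew a little.
direct = path_latency(links=1, routes_in_table=11)
print(f"A->C via B: {via_b:.2f} ms   direct: {direct:.2f} ms")
```

Here the direct link wins easily for A–C traffic, but notice the per-route cost lands on every flow through every router. Scale the table from 11 routes to a million and the “free” shortcut stops being free.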
I guess it’s a bit like adding a new road between two existing roads. Is it worth adding a junction and a set of traffic lights to some existing roads, or would that slow down traffic enough not to be worth doing?
Maybe, since you work with software more, it would make sense to put it this way: why don’t you create an index on every single column of every table in SQL?
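You can actually watch that trade-off with nothing but Python’s built-in sqlite3. A crude sketch (absolute timings will vary on your machine; the direction won’t):

```python
# Crude demo of the "index everything" trade-off using stdlib sqlite3.
import random
import sqlite3
import time

def timed_inserts(index_everything: bool, n: int = 50_000) -> float:
    """Time bulk inserts with and without an index on every column."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE t (a INTEGER, b INTEGER, c INTEGER)")
    if index_everything:
        for col in "abc":  # the "index every single column" approach
            db.execute(f"CREATE INDEX ix_{col} ON t ({col})")
    rows = [tuple(random.randrange(10_000) for _ in range(3)) for _ in range(n)]
    start = time.perf_counter()
    db.executemany("INSERT INTO t VALUES (?, ?, ?)", rows)
    db.commit()
    return time.perf_counter() - start

print(f"no indexes:   {timed_inserts(False):.3f}s")
print(f"every column: {timed_inserts(True):.3f}s  (each write updates 3 indexes)")
```

Reads on the indexed columns get faster, but every single write now has to maintain all three indexes too, which is exactly the routing-table situation in miniature.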
Or just look at it as premature optimisation. There’s a saying about that in software engineering: “premature optimisation is the root of all evil”! ;-)
Another thing to keep in mind, though, is that there are definitely quite a few bad decisions still kicking around from when the internet was new. It takes time and effort to get rid of the legacy junk, same as in programming.