"We are back" « oc.at

Google revamps HTTP

Hansmaulwurf 13.11.2009 - 12:04

Hansmaulwurf

u wot m8?
Registered: Apr 2005
Location: VBG
Posts: 5639
Quote
Google apparently really doesn't want to skip a single layer in its effort to make web usage snappier: not only have they been working on their own browser since last year, now, thanks to SPDY (pronounced "speedy"), all of web communication is to be sped up in one go.
http://derstandard.at/fs/1256744718...so-flott-machen

Chrome-Blog:
http://blog.chromium.org/2009/11/2x-faster-web.html

An SSL-encrypted HTTP protocol, that can only be awesome; curious to see how this develops :)

nexus_VI

Overnumerousness!
Registered: Aug 2006
Location: südstadt
Posts: 3772
Fefe has already written something on the topic: http://blog.fefe.de/?ts=b402b9c9

COLOSSUS

Administrator
GNUltra
Registered: Dec 2000
Location: ~
Posts: 12142
I share fefe's views there: this doesn't sound particularly sensible to me, and it probably won't catch on either.

mat

Administrator
Legends never die
Registered: Aug 2003
Location: nö
Posts: 25538
And where exactly in the SPDY whitepaper does it say that resources get downloaded unnecessarily?

Here are the goals of the extended protocol:
Quote
The SPDY project defines and implements an application-layer protocol for the web which greatly reduces latency. The high-level goals for SPDY are:
To target a 50% reduction in page load time. Our preliminary results have come close to this target (see below).
To minimize deployment complexity. SPDY uses TCP as the underlying transport layer, so requires no changes to existing networking infrastructure.
To avoid the need for any changes to content by website authors. The only changes required to support SPDY are in the client user agent and web server applications.
To bring together like-minded parties interested in exploring protocols as a way of solving the latency problem. We hope to develop this new protocol in partnership with the open-source community and industry specialists.

Some specific technical goals are:
To allow many concurrent HTTP requests to run across a single TCP session.
To reduce the bandwidth currently used by HTTP by compressing headers and eliminating unnecessary headers.
To define a protocol that is easy to implement and server-efficient. We hope to reduce the complexity of HTTP by cutting down on edge cases and defining easily parsed message formats.
To make SSL the underlying transport protocol, for better security and compatibility with existing network infrastructure. Although SSL does introduce a latency penalty, we believe that the long-term future of the web depends on a secure network connection. In addition, the use of SSL is necessary to ensure that communication across existing proxies is not broken.
To enable the server to initiate communications with the client and push data to the client whenever possible.
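The first technical goal above, many concurrent requests over a single TCP session, boils down to framed multiplexing. Here is a minimal Python sketch of the idea; the frame layout (4-byte stream id plus 4-byte length) is an assumption for illustration only, not SPDY's actual wire format:

```python
import struct

def frame(stream_id: int, payload: bytes) -> bytes:
    """Prefix a payload chunk with a 4-byte stream id and 4-byte length."""
    return struct.pack("!II", stream_id, len(payload)) + payload

def demux(buffer: bytes) -> dict:
    """Reassemble interleaved frames into per-stream payloads."""
    streams = {}
    offset = 0
    while offset < len(buffer):
        stream_id, length = struct.unpack_from("!II", buffer, offset)
        offset += 8
        streams[stream_id] = streams.get(stream_id, b"") + buffer[offset:offset + length]
        offset += length
    return streams

# Chunks of three logical "requests" interleaved on one connection:
wire = (frame(1, b"GET /a") + frame(3, b"GET /b")
        + frame(1, b" HTTP/1.1") + frame(5, b"GET /c"))
print(demux(wire))
```

Because each frame carries its stream id, a slow response on stream 1 no longer blocks streams 3 and 5, which is exactly the head-of-line-blocking problem that plain HTTP pipelining cannot avoid.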

And here, roughly, is what currently goes wrong:
Quote
Single request per connection. Because HTTP can only fetch one resource at a time (HTTP pipelining helps, but still enforces only a FIFO queue), a server delay of 500 ms prevents reuse of the TCP channel for additional requests. Browsers work around this problem by using multiple connections. Since 2008, most browsers have finally moved from 2 connections per domain to 6.
Exclusively client-initiated requests. In HTTP, only the client can initiate a request. Even if the server knows the client needs a resource, it has no mechanism to inform the client and must instead wait to receive a request for the resource from the client.
Uncompressed request and response headers. Request headers today vary in size from ~200 bytes to over 2KB. As applications use more cookies and user agents expand features, typical header sizes of 700-800 bytes is common. For modems or ADSL connections, in which the uplink bandwidth is fairly low, this latency can be significant. Reducing the data in headers could directly improve the serialization latency to send requests.
Redundant headers. In addition, several headers are repeatedly sent across requests on the same channel. However, headers such as the User-Agent, Host, and Accept* are generally static and do not need to be resent.
Optional data compression. HTTP uses optional compression encodings for data. Content should always be sent in a compressed format.
So that's where the confusion comes from. It only means that the server can force it when it considers the resource necessary. That's urgently needed anyway, because the amount of stuff you have to set to "No Cache" these days just to get a script working in all browsers (thx but no thx, Opera!) is simply too much of a good thing.
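The whitepaper's header-overhead point can be made concrete with a quick back-of-the-envelope calculation; the ~800-byte header size is from the quote above, while the 128 kbit/s uplink is an assumed figure for a typical ADSL line of the time:

```python
# Serialization latency of one uncompressed header block on a slow uplink.
header_bytes = 800       # typical request-header size per the whitepaper
uplink_bps = 128_000     # assumed ADSL uplink: 128 kbit/s

latency_ms = header_bytes * 8 * 1000 / uplink_bps
print(f"{latency_ms:.0f} ms per request just for headers")  # prints "50 ms per request just for headers"
```

Fifty milliseconds per request, before a single byte of payload moves, is why compressing and deduplicating headers shows up as a core SPDY goal.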

On the topic in general: the HTTP protocol is old and contains a lot of cruft. Even "fefe" just acknowledged that. SPDY wants to clean that up too, but restructure things on top of it. The advantages only really come into play, though, once you take apart how current browsers work (and they are the main reason HTTP exists at all). As mentioned in the whitepaper, today's browsers use several HTTP connections in parallel to work around various delays. That creates a lot of overhead, because the protocol was never designed for x connections. Good idea imo.
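Another of the whitepaper points quoted above, header compression, is easy to demonstrate: typical request headers are highly repetitive and shrink considerably even under plain zlib (the header block below is a made-up example, and zlib stands in for whatever scheme SPDY actually ends up using):

```python
import zlib

# A made-up but realistic uncompressed request-header block.
headers = (
    b"Host: www.example.com\r\n"
    b"User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.1.5)\r\n"
    b"Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n"
    b"Accept-Language: de-at,de;q=0.8,en-us;q=0.5,en;q=0.3\r\n"
    b"Accept-Encoding: gzip,deflate\r\n"
    b"Cookie: sessionid=abc123; tracking=xyz789\r\n"
)

compressed = zlib.compress(headers, 9)  # level 9 = maximum compression
print(len(headers), "->", len(compressed), "bytes")
```

And since headers like User-Agent and Accept* repeat verbatim on every request of a session, a stateful per-connection compressor would do even better than this one-shot result.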

Deployment would also be perfectly feasible if suitable Apache modules existed for it and the browser could opt in (via a client setting). It would be a drawn-out process, and a complete switchover may never happen, or only after many years, but in the long run it's certainly the better choice.
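The opt-in deployment described above could, hypothetically, ride on ordinary request headers, so clients and servers without support simply keep talking HTTP/1.1. A sketch of that negotiation logic; the `X-Protocol-Offer` header name is invented for illustration, SPDY specifies no such mechanism:

```python
# Hypothetical opt-in negotiation: fall back to HTTP/1.1 unless both
# sides signal SPDY support. Header name is made up for this sketch.
def choose_protocol(request_headers: dict, server_supports_spdy: bool) -> str:
    client_offer = request_headers.get("X-Protocol-Offer", "")
    if server_supports_spdy and "spdy" in client_offer.lower():
        return "spdy"
    return "http/1.1"

print(choose_protocol({"X-Protocol-Offer": "spdy/1"}, True))   # prints "spdy"
print(choose_protocol({}, True))                               # prints "http/1.1"
```

The point of such a scheme is exactly the gradual upgrade path: nothing breaks for participants that have never heard of the new protocol.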

Edit: Updated. :)

JC

Club member
Disruptor
Registered: Feb 2001
Location: Katratzi
Posts: 9066
Quote
It also looks like this protocol is designed by Web people, rather than network people. How the IETF applications area will respond to this effort is a big unknown. For instance, one thing that isn't mentioned in the protocol specification is how a browser knows that it should set up a SPDY connection rather than an HTTP connection. Are we going to see SPDY:// in URLs rather than HTTP:// ? That wouldn't work with browsers that don't support the new protocol.

It's for reasons like this that the IETF isn't a big fan of replacing protocols wholesale. It's much more in line with the IETF way of doing things to add the new features proposed in SPDY to a new—but backward-compatible—version of HTTP. Designing a new protocol that does everything better than an existing protocol usually isn't the hard part. The real difficulty comes in providing an upgrade path that allows all the Internet users to upgrade to the new protocol in their own time such that everything keeps working at every point along that path.

This is something the SPDY developers recognize. There are proposals for running HTTP over SCTP, a protocol similar to TCP, but with the ability to multiplex several data streams within a single session. That would have some of the same advantages as SPDY. Unfortunately, most home gateways don't know about SCTP and can only handle TCP and UDP, so HTTP over SCTP would face a long, uphill battle, not unlike IPv6, but without the ticking clock that counts down the available IPv4 addresses.
Source: Ars Technica