Performance of AMPED as determined by website growth and technological advances

Lisa Hsu

If the size of a typical website grows at a greater rate than advances in RAM and disk technology, the AMPED architecture will outperform SPED because a larger data set is less likely to fit in main memory. Web servers that implement the SPED architecture suffer a performance loss when a client requests a file that is not cached in main memory, because the server’s single process blocks on the disk read. Other events that require processing, such as accepting a connection or serving a file already in main memory, cannot be handled even though they do not need to touch the disk. AMPED is more flexible: it uses helper processes to avoid stalling its event dispatcher on a disk read. If a requested file is unlikely to be resident in main memory, the dispatcher hands the request to a helper through an IPC channel, and the helper performs the operation that may block. Because the AMPED event dispatcher does not transmit the requested file until it is available in main memory, the dispatcher is never blocked and remains free to process other requests.

By similar reasoning, both the MP and MT architectures should also outperform SPED: each can handle multiple requests concurrently, so if a process in MP (or a thread in MT) blocks on a disk read, other requests can still be served. In practice, though, MP and MT neither greatly outperform SPED nor rival AMPED, for several reasons: cache resources are wasted on multiple address spaces, CPU time is spent on context switching, access to shared data must be synchronized, MP lacks shared global variables, MT requires that the OS support kernel threads, and both architectures are difficult to optimize.
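The dispatcher-plus-helper idea can be sketched in a few lines of Python. This is a minimal illustration, not the Flash server's actual code: the function names and the polling loop are my own, and a real AMPED dispatcher would accept connections and serve cached files while the helper works, rather than merely waiting.

```python
import selectors
from multiprocessing import Pipe, Process

def helper(conn):
    """Helper process: performs the potentially blocking disk read on
    behalf of the event dispatcher, as in the AMPED design."""
    path = conn.recv()              # request arrives over the IPC channel
    with open(path, "rb") as f:
        data = f.read()             # this read may block on the disk...
    conn.send(data)                 # ...but only the helper is stalled
    conn.close()

def dispatch_read(path):
    """Dispatcher side: hand the read to a helper over a pipe, then
    multiplex on the pipe instead of blocking in the dispatcher itself."""
    parent_conn, child_conn = Pipe()
    worker = Process(target=helper, args=(child_conn,))
    worker.start()
    parent_conn.send(path)          # issue the request to the helper

    sel = selectors.DefaultSelector()
    sel.register(parent_conn, selectors.EVENT_READ)
    data = None
    while data is None:
        # While waiting, a real dispatcher would process other events
        # here; this sketch only polls the helper's pipe.
        for key, _ in sel.select(timeout=0.1):
            data = key.fileobj.recv()
    worker.join()
    return data

if __name__ == "__main__":
    import os, tempfile
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(b"hello from disk")
        path = f.name
    print(dispatch_read(path))      # b'hello from disk'
    os.unlink(path)
```

The key point is that only the helper process ever sits in a blocking `read`; the dispatcher observes the pipe through the same event-multiplexing mechanism it uses for network sockets.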

AMPED has a further advantage over SPED when data sets grow faster than hardware improves, because RAM continues to improve in speed, capacity, and price faster than hard drives do. As technology advances, client requests that involve disk read/write operations will therefore appear slower and slower compared to requests for cached files, and the bandwidth gap between a cache access and a disk access will keep widening.
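A back-of-the-envelope calculation makes the gap concrete. The latency figures below are assumed, order-of-magnitude values chosen for illustration, not measurements from any particular system.

```python
# Assumed, order-of-magnitude latency figures for illustration only:
RAM_ACCESS_S = 100e-9    # ~100 ns to serve content already in RAM
DISK_READ_S = 10e-3      # ~10 ms for a hard-drive seek plus read

def mean_service_time(miss_rate):
    """Expected per-request fetch time for a given main-memory miss rate."""
    return (1 - miss_rate) * RAM_ACCESS_S + miss_rate * DISK_READ_S

# Even a 1% miss rate makes the disk dominate total service time:
print(mean_service_time(0.0))    # 1e-07 s
print(mean_service_time(0.01))   # roughly 1e-04 s, about 1000x slower
```

Because the disk term is so many orders of magnitude larger, a small increase in the miss rate, which is exactly what a data set outgrowing RAM produces, dominates average service time; and as RAM improves faster than disk, the ratio only grows.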

On the other hand, if technology advances at a rate comparable to the growth of websites, SPED will outperform AMPED because a requested page will be more likely to fit in main memory. Compared to SPED, AMPED suffers a small performance loss when a page is cached, due to the overhead of checking whether the requested content is resident in memory. Additionally, AMPED maintains three separate caches, one each for pathname translations, mapped files, and response headers, which subtracts from the total memory usable for storing requested content. SPED and AMPED outperform MP and MT regardless of data set size, for the same reasons listed above. However, if hardware advances in a way that makes synchronization or context switching cheaper, then MP and MT might perform better, though probably still not as well as either of the event-driven architectures.
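The residency check that costs AMPED its small hit-path overhead can be sketched as follows. The dictionary-based caches and the function names here are stand-ins of my own, not the Flash implementation's data structures; the point is only the shape of the decision path.

```python
# Illustrative stand-ins for AMPED's three caches (structures assumed,
# not taken from the Flash implementation):
pathname_cache = {}   # URL -> translated filesystem path
file_cache = {}       # path -> file contents held in main memory
header_cache = {}     # path -> precomputed response header

def serve(url, offload_to_helper):
    """On a hit, serve inline exactly as SPED would; the residency check
    itself is the small per-request overhead AMPED pays relative to SPED."""
    path = pathname_cache.setdefault(url, url.lstrip("/"))
    if path in file_cache:                       # cache-residency check
        return header_cache.get(path, b"") + file_cache[path]
    return offload_to_helper(path)               # miss: may block on disk
```

When the whole data set fits in memory, every request takes the first branch, and AMPED's only cost relative to SPED is the check itself, plus the memory the three caches consume.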

Realistically, I would expect websites to grow faster than hardware technology improves, because requested content evolves in step with technological advances. For example, web content once consisted mainly of simple text files; now clients request streaming video, sound files, and high-resolution pictures. Even if RAM doubles approximately every year and the rate of technological improvement is exponential, hardware still faces physical limitations. Software has no such physical limitations, and data set sizes seem bound to increase without limit. Therefore, the AMPED architecture will likely prevail in the future.