
Incomplete Thought: Virtual Machines Are the Problem, Not the Solution…

September 25th, 2009

I’m an infrastructure guy. A couple of days ago I had a lightbulb go on. If you’re an Apps person, you’ve likely already had your share of illumination. I’ve just never thought about things from this perspective. Please don’t think any less of me 😉

You can bet I’m talking above my pay grade here, but bear with my ramblings for a minute and help me work through this. (Update: I’m very happy to see that Surendra Reddy [@sureddy – follow him] did just that with his excellent post, cross-posted in the comments below, here. Also, check out Simon Crosby’s (Citrix CTO) post “Whither the venerable OS.”)

It comes down to this:

Virtual machines (VMs) represent the symptoms of a set of legacy problems, packaged up to provide a placebo effect as an answer to problems that, until lately, we have appeared disinclined, and not technologically empowered, to solve.

If I had a wish, it would be that VMs end up being the short-term gap-filler they deserve to be and ultimately become a legacy technology, so we can solve some of our real architectural issues the way they ought to be solved.

That said, please don’t get me wrong, VMs have allowed us to take the first steps toward defining, compartmentalizing, and isolating some pretty nasty problems anchored on the sins of our fathers, but they don’t do a damned thing to fix them.

VMs have certainly allowed us to (literally) “think outside the box” about how we characterize “workloads” and have enabled us to begin talking about how we make them somewhat mobile, portable, interoperable, easy to describe and inventory, and in some cases more secure. Cool.

There’s still a pile of crap inside ’em.

What do I mean?

There’s a bloated, parasitic resource-gobbling cancer inside every VM.  For the most part, it’s the real reason we even have mass market virtualization today.

It’s called the operating system:

[Diagram: Virtualization]

If we didn’t have resource-inefficient operating systems, handicapped applications that were incestuously hooked to them, and tons of legacy networking stuff to deal with that unholy affinity, imagine the fun we could have.  Imagine how agile and flexible we could become.

But wait, isn’t server virtualization the answer to that?

Not really.  Server virtualization like that pictured in the diagram above is just the first stake we’re going to drive into the heart of the frankenmonster that is the OS.  The OS is like Cousin Eddie and his RV.

The approach we’ve taken today is that the VMM/Hypervisor abstracts the hardware from the OS. The applications are still stuck on top of operating systems that don’t provide much in the way of benefit, given the emergence of development frameworks/languages such as J2EE, PHP, Ruby, .NET, etc. that were built around the notions of decoupled, distributed and mashable application “fabrics.”

Every ship travels with an anchor, in the case of the VM it’s the OS.

Imagine if these applications didn’t have to worry about the resource-hogging, control-freak, I/O limiting, protected mode schizophrenia and de-privileged ring spoofing of hypervisors associated with trying not to conflict with or offend the OS’s sacred relationship with the hardware beneath it.

Imagine if these application constructs were instead distributed programmatically, could intercommunicate using secure protocols and didn’t have to deal with legacy problems. Imagine if the VMM/Hypervisor really was there to enable scale, isolation, security, and management.  We’d be getting rid of an entire layer.

If that crap in the middle of the sandwich makes for inefficiency, insecurity and added cost in virtualized enterprises, imagine what it does at the Infrastructure as a Service (IaaS) layer in Cloud deployments where VMs — in whatever form — are the basis for the operational models.  We have these fat packaged VMs with OS overhead and attack surfaces that really don’t need to be there.

For example, most of the pre-packaged AMIs found on AWS are bloated general-purpose operating systems with some hardening applied (if at all), but there’s just all that code… sitting there… doing nothing except taking up storage, memory and compute resources.

Why do we need this? Why don’t we at least see more of a push towards JEOS (Just Enough OS) in the meantime?
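
To make the JEOS point concrete, here’s a toy Python sketch of the build-up approach: start from what the application actually needs and take the dependency closure, instead of hardening down a general-purpose image. The package names and dependency graph are invented, not any real distro’s:

    # Toy illustration of the JEOS idea: build the image up from what the
    # app needs, rather than hardening down a general-purpose one.
    # The package names and dependency graph here are invented.
    DEPS = {
        "my-web-app": ["python-runtime", "libssl"],
        "python-runtime": ["libc"],
        "libssl": ["libc"],
        "libc": [],
        # ...the other ~1,500 packages in a general-purpose image never
        # appear, because nothing the app needs pulls them in.
    }

    def closure(pkg, seen=None):
        """Transitive closure of a package's dependencies."""
        seen = set() if seen is None else seen
        if pkg not in seen:
            seen.add(pkg)
            for dep in DEPS.get(pkg, []):
                closure(dep, seen)
        return seen

    print(sorted(closure("my-web-app")))  # 4 packages, not a bloated AMI's thousands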

I think most virtualization vendors today who are moving their virtualization offerings to adapt to Cloud are asking themselves the same questions, and answering them by realizing that the real win in the long term — once enterprises are done with consolidation and virtualization and hit the next “enterprise application modernization” cycle — will be to develop and engineer applications directly around platforms which obviate the OS.

So these virtualization players are making acquisitions to prepare them for this next wave — the real emergence of Platform as a Service (PaaS).

Some, like Microsoft with Azure, are simply starting there. Even SaaS vendors have gone down-stack and provided PaaS offerings to further allow for connectivity, integration and security in the place they think it belongs.

In the case of VMware and their acquisition of SpringSource, that piece of bloat in the middle can be seen as simply going away; whatever you call it, it’s about disintermediating the OS completely and it seems to me that the entire notion of vApps addresses this very thing.  I’m sure there are a ton of other offerings that I simply didn’t get before that are going to make me go “AHA!” now.

I’m not sure organizationally or operationally that most enterprises can get their arms around what it means to not have that OS layer in the middle in the short term, but this new platform-oriented Cloud is really interesting.  It makes those folks who may have made the conversion from server-hugger to VM-hugger and think they were done adapting, quite uncomfortable.

It makes me uncomfortable…and giddy.

All the things I know and understand about how things at the Infrastructure layer interact with applications and workloads at the Infostructure layer will drastically change. The security models will change. The solutions will change. Even the notion of vMotion — moving VMs around — will change. In fact, in this model, vMotion isn’t really relevant.

Admittedly, I’ve had to call into question over the last few days just how relevant the notion of “Infrastructure 2.0” is within this model — at least how it’s described today.

Cloud v1.0, with all its froth and hype, is going to be nothing compared to Cloud 2.0 — the revenge of SOA, web services, BPM, enterprise architecture and the developer. Luckily for the sake of us infrastructure folks, we still have time to catch up and see the light as the VMM buys us visibility and a management plane. However, the protocols and models for how applications interact with the network are surely going to change and accelerate due to Cloud — at least they should.

Just look at how developments such as XMPP and LISP are going to play in a PaaS-centric world…

Like I said, it’s an incomplete thought and I’m not enlightened enough to frame a better discussion in written form, but I can’t wait until the next Infrastructure 2.0 Working Group to bring this up.

I have an appreciation for such a bigger piece of the conversation now.  I just need to get more edumacated.

My head ahsplodes.

Any of this crap make sense to you?

/Hoff

  1. September 25th, 2009 at 17:20 | #1

    Yes, and no. I think you have ID'ed the fundamental issue and I couldn't agree more. I'm not sure the OS 'goes away', though. OSes have always just been a generalized 'platform' to which people code. All we are talking about is morphing from generalized platforms to application specific ones that are probably on-demand and just-in-time.

    Whether it's a 'specialized OS', a JVM, or Emacs running direct on the hypervisor, I'm not sure it matters.

  2. September 25th, 2009 at 17:33 | #2

    @Randy Bias So if I refined the statement to say that general-purpose OSes within VMs will be replaced with JEOSes to support specific application frameworks/languages atop the VMM, does that make it cleaner?

    /Hoff

  3. Tadd Axon
    September 25th, 2009 at 17:40 | #3

    Back to the future time and the rebirth of the mainframe? BSD jails coming back into vogue?

  4. September 25th, 2009 at 17:43 | #4

    @Beaker

    Yes.

    @TaddAxon

    Sorry man, 'jails' have been 'in' for a while. 😉 See OpenVZ and Solaris Zones.

  5. September 25th, 2009 at 17:51 | #5

    @Tadd Axon I started to add a chunk pertaining to OS vendors like RedHat and Sun (Solaris) but it would've made the post even longer. I'm hoping Glenn will comment from that perspective.

    Imagine how weirdly schizophrenic Microsoft folks must feel: you've got the traditional desktop OS folks, W2K8 with Hyper-V and Azure. Hedging the bets, sure, but wow how much tension must there be there?

    I guess I'll ask next month @ Bluehat 😉

  6. September 25th, 2009 at 18:05 | #6

    –Randy's "All we are talking about is morphing from generalized platforms to application specific ones" is a pretty huge statement. What is specialized in each platform for each application? What stays generic?

    –Does Google's current architecture have any sway in how you are thinking about this? The file and data side seems to be just as important.

    –Just to be devil's advocate, I kind of like virtualization with a shared-kernel architecture, like how Solaris does zones. It's SO much less overhead. Is it possible that will eventually have some long-term impact once the froth of Windows virtualization dies down?

    This is a really important topic, just hard to pick the timing. It also takes an understanding of dominant designs vs. optimal ones…

  7. September 25th, 2009 at 19:18 | #7

    @James Watters

    Good point. That is a big statement. I think it can be broken down, though. It's like asking what's generic and what's specialized in the JVM. Same as in the 'OS'. Answer: "It depends". Hence, my point that just-in-time JeOS seems likely. Sort of like FastScale, but without the marketing hype.

    Also make sure you check out AppZero. That's another kind of containerization that I think is likely to gain traction.

  8. September 25th, 2009 at 19:27 | #8

    Toutvirtual had a good write-up on this:

    http://toutvirtual.com/blogs/2008/03/17/why-do-hy….

    I like to call these hypervisor-hosted VMs "fake machines" rather than real "virtual machines":

    http://stage.vambenepe.com/archives/135

  9. September 25th, 2009 at 19:38 | #9

    @James Watters I'll tell you what, though. If I'm a VC looking at this from the perspective of a 5-7 year investment in enterprise systems (do any of them do that anymore?), I'm thinking from an applications/developer perspective, not a data center perspective.

    That's not to say data center isn't critical (both Beaker and I have written posts about this in the past year), but more and more the enterprise will build applications with less and less dependency on specific physical systems…or abstracted physical systems. The focus turns more to distributed services, distributed application execution and distributed data management.

    The only problem with the cloud today, though, is that the choice of application architectures is rather tightly coupled to the physical architectures used to run the cloud. Amazon only allows EC2 instances to run on one network. The whole separation of concerns in networking terms is not an option in most existing IaaS clouds.

    One of two things will happen:

    – Clouds will gain configurability for storage, compute and networking to be controlled directly by the applications and/or services; bypassing the need for a sophisticated OS, and allowing an ever expanding selection of applications to be supported

    – Applications will find ways to abstract what they want from the underlying architectures.

    If you think the latter sounds like the better choice, catch Hoff at BlueHat this year.

    James

  10. September 25th, 2009 at 20:21 | #10

    JEOS is already happening out there. BEA were probably first to market with the vapourware in the form of their 'Bare Metal' project, but others like rPath have followed since with more substance.

    The real trick will be assembling just enough OS to provide all of the services needed by whatever application stack is sitting on top of it, which turns into a hairy formal methods problem (what could this app ask an OS to do?) or a hairy empirical profiling problem (if I can provide unit tests with 100% coverage which OS services get used?). This was certainly part of the vision for the CohesiveFT guys when they built elastic server on demand, but how much OS is inside the can has become a sideshow compared to the more real problem of what's needed to wire many cans together and make them a) work and b) manageable.
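
    In rough, toy form the empirical route might look like this (a Python sketch assuming a Linux box with strace installed; the test entry point and the parsing are invented for illustration):

        # Run the test suite under strace and record which system calls the
        # app actually makes; everything else the OS offers is a candidate
        # for leaving out (modulo the 100%-coverage caveat above).
        import re
        import subprocess
        import tempfile

        def syscalls_used(test_cmd):
            with tempfile.NamedTemporaryFile(suffix=".trace") as f:
                subprocess.run(["strace", "-f", "-qq", "-o", f.name] + test_cmd)
                trace = open(f.name).read()
            # Lines look like "openat(AT_FDCWD, ...) = 3"; with -f, child
            # processes' lines carry a leading pid.
            return set(re.findall(r"^(?:\d+\s+)?(\w+)\(", trace, re.M))

        used = syscalls_used(["./run_tests.sh"])  # hypothetical test suite
        print(len(used), "distinct syscalls:", sorted(used))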

    PaaS helps to a degree, there will be some OS bits that a Spring stack or whatever simply never could call, so they can be ditched. Raising the general purpose bar from the OS to the PaaS layer doesn't help a great deal though – you still need a ton of stuff there just in case an application might find it useful.

    Another key consideration is monitoring. In the past, infrastructure guys have often been deeply embroiled in the business of ensuring apps function to their SLA. IaaS makes the infrastructure SLA much coarser-grained (and easy to measure by both sides). This throws a whole nasty sack of complexity over the infrastructure/app-dev wall into the hands of the app support guys (where they exist), so a good PaaS should probably have some service-level management stuff in the bake too.

    You mention LISP, and it's probably worth having a call out to Haskell, Erlang and some of the other functional languages that are gaining traction. These things can spew out code that's capable of devouring a lot of cores, but what's providing the abstraction between the cores and the code? The hypervisor management layer, the (JE)OS, something else?

  11. September 25th, 2009 at 22:22 | #11

    Despite spending an inordinate amount of time at the infrastructure layer (along with everyone else) I've said for a long time that it's the most boring of the three. I was just ranting about this yesterday on the CloudCamp London unpanel while explaining that the real return (but also the real effort in terms of standardisation etc.) is in moving further up the stack into the platform and application layer. That is to say that I agree completely that the OS is like a cancer that sucks energy (e.g. resources, cycles), needs constant treatment (e.g. patches, updates, upgrades) and poses significant risk of death (e.g. catastrophic failure) to any application it hosts.

    There are many different types of virtualisation (e.g. compute, storage, network, etc.) and you can rule a line at pretty much whatever layer of the solution stack you like from the "bare metal" to the OS to the various APIs it exposes (e.g. libc) to the runtime (e.g. JVM, CLR) or even in the application themselves (think multi-tenancy). The trick is to virtualise at the right point in the stack *for the application*.

    There's nothing to say you can't build your line of business application on Force.com (application layer), feed it orders from an application running on Google App Engine (platform layer) and process payments with a PCI compliant[1] machine image (infrastructure layer).

    Sam

    1. PCI will eventually have to be revised to cater for cloud but will still likely require the highest possible level of isolation. In the mean time there's nothing stopping providers from deploying customer images onto real rather than virtual machines ala Dedibox.

  12. Spaf
    September 25th, 2009 at 22:37 | #12

    You are on the path to enlightenment now.

    Operating systems exist for 2 reasons: to provide high-level interfaces to commonly-used services, and to control shared access to expensive/rare items. But both of those were 1960s issues. Processors and memory are no longer expensive, and some of the services offered are no longer common. VMs are simply a step along the way to building dedicated systems for particular tasks.

    VMs are a patch — a kludge — that use current hardware and software. But they are hardly optimal.

    Next step in the evolution will be VMs that include emulation of some base OS calls so client apps still get the simple, clean interfaces thru the VM executive. It is then only a short step to compiling the app with a custom library to run stand-alone on a processor core.

  13. September 25th, 2009 at 22:42 | #13

    In other words, Windows is just the boot loader for Outlook.

  14. September 26th, 2009 at 00:10 | #14

    Chroots/Zones/etc. are another option for "lightweight" VMs but I don't see a problem with a provider exposing e.g. Linux Standard Base or Win32 APIs and metering resource usage. That way you get everything you'd usually get from an OS without having to worry about the OS itself.

    Sam

  15. September 26th, 2009 at 04:44 | #15

    @Randy Bias I need to learn more about how just-in-time JeOS like FastScale really works, esp. how much performance it really delivers. Thanks for the pointers.

  16. September 26th, 2009 at 05:04 | #16

    @James Urquhart Really good contrasts in your reply of course, so lots to think about and I hope some granular posts on this topic come out from this whole crew that has replied here.

    There seems to remain a very healthy tension between abstracted and granular control. For early indicators on highly abstracted models I follow SimpleDB, Google App Engine, etc., and they constrain the developer in pretty substantial ways today.

    Then there are cases like @sureddy's Yahoo architecture where everything is granular, specific and dedicated, and as he looks for abstractions at the VM level he keeps seeing performance hits and developers getting annoyed etc… and then it's a question of re-write or tolerate the hit…

    …at least for me this is a tough race to call… but I'm trying to keep an eye out for early indicators of where it's headed…

    James

  17. September 26th, 2009 at 06:47 | #17

    The virtualization technology you are looking for is called Parallels Virtuozzo Containers. Available since 1999, currently in its 4th generation, for both Windows and Linux, part of the Linux kernel and supported by Microsoft. Every hosting provider uses it.

  18. September 26th, 2009 at 07:11 | #18

    VMM/Hypervisor for x86 technology exists and flourishes for one reason:

    We don't know what to do with Multi-Core Processors.

    x86-based VMM/Hypervisors are the problem, not the solution. Of course Operating Systems are bloated. Why do you think Microsoft made Windows Server 2008 Core available? Why do you think that they use a customized kernel for Azure? Why do you think that Google knows what they're doing, when nobody else seems to?

    Amazon is bringing cloud to the masses in the way that they understand. But IT doesn't understand the big picture today (and at least not since this whole Internet thing took off). We've been going in circles (i.e. your network/system/app hamster-wheel-of-pain) for about two decades now.

  19. September 26th, 2009 at 07:58 | #19

    @ilya Baimetov Some interesting points and assertions (however exaggerated they appear to me to be). How does Parallels obviate the bloated operating system issue?

    @Andre Gironda So if I distill your comments I think I arrive at "nod." 😉

  20. mike
    September 26th, 2009 at 10:10 | #20

    Neat concept. Networking, RAM, and storage can be pooled and shared among multiple servers already. I think the missing component is being able to truly virtualize the CPU resources without having to make programmers learn about multi-threading.

  21. September 26th, 2009 at 14:19 | #21

    Great discussion, as always I'm late to the party…

    @Randy Bias, you're spot on; thanks for steering towards JEOS. We're talking trimming the bloat, not complete elimination of the OS.

    @James Watters, Come on, Google had nine hours of downtime when migrating servers! (; Just kidding, what they've done will certainly sway how workloads are broken up and how data is handled. Re: Solaris shared kernels and zones, I'll have to read up on that; I used to follow their kernel development a bit more closely.

    @Chris Swan, @Beaker is referring to the Locator/Identifier Separation Protocol (LISP). This is an experimental protocol used to create global addresses by separating them into an End-point Identifier (EID) and a Routing Locator (RLOC). The EID would remain with the end-point even if it moves to a different service provider or from internal to external. Search on 'IETF LISP' for more info. (Yes, bummer of an acronym.) Since you mentioned functional programming, it's used in the cloud now. Twitter uses Scala (JVM) for much of its backend development, and Facebook chat was built with Erlang. The Erlang concurrency model, adopted by Scala too, brings a level of parallelism not possible with shared-state concurrency models.
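
    A toy Python sketch of the split (addresses made up from documentation ranges), just to show that when an endpoint moves, only the locator in the mapping changes:

        # LISP's core idea: the EID names *who* you're talking to, the RLOC
        # says *where* it currently attaches. Move the workload and only the
        # EID->RLOC mapping changes; sessions bound to the EID stay put.
        mapping_system = {"2001:db8:eid::10": "203.0.113.7"}  # EID -> RLOC

        def deliver(eid, payload):
            rloc = mapping_system[eid]  # map-resolve, then encapsulate
            print(f"encap for {eid} in tunnel to RLOC {rloc}: {payload!r}")

        deliver("2001:db8:eid::10", "hello")
        # The endpoint migrates to another provider: same EID, new RLOC.
        mapping_system["2001:db8:eid::10"] = "198.51.100.42"
        deliver("2001:db8:eid::10", "hello again")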

  22. David O'Berry
    September 26th, 2009 at 17:04 | #22

    A lot of good comments here and I am late to the party…

    Having said that, Hoff has just about nutshelled it. Initially, the issue is that instead of stepping back and fixing the problems, we really just created more and compensated for previous bad coding practices by just recovering better in some instances. Not a wholly bad thing to be sure, especially as it has evolved now, but to do that at first we added a great deal of complexity, far less visibility, and a significant increase in attack surface. The sad part about it is that the OPEX was decreased initially by doing this on the surface but increased substantially over the term as sprawl took hold.

    Now we are coming around and snapping the whip, but unfortunately the caboose is still way behind the engine with no real plan for how to keep up. Even Infra 2.0, in its current state, is going to stay below the level it would need to operate at to effect any significant change at the level we need it most. That is why I think the new concepts have to be looked at holistically but practically, while staying away from fettering the thought processes with the challenges of the past. If we fail to make that leap then we will basically end up with a Frankenstein existence, always behind the curve… trying to keep from failing instead of trying to win.

    I said this on a panel at Forrester last year: too many practitioners are their own worst enemies at times, because they fail to realize that saying no and putting their fingers in their ears lost effectiveness in the '80s and early '90s and derailed completely after the dot-com debacle. Yet it happens over and over again, compromising the efficacy of even the valid things we are trying to do.

    Things have to change. If not now, when? If not us, who?

    -David

    PS. Long day..this may all make a lot less or a lot more sense to me in the morning.

  23. September 27th, 2009 at 00:02 | #23

    makes perfect sense to me. 🙂

    My light bulb moment came a little while ago, after reading your blog back catalog. I read a post about Google vs. VMware, pondered a bit, a light went on, then I wrote this:

    http://wonkothesane.com/blog/?p=357
    "..the fight is between Google trying to convince companies to consume IT apps in a new way and VMware trying to convince companies their existing way is fine, but that vmware can make it run better…"

    Thanks for the blog. It's a key part of my ongoing career development reading.

  24. September 27th, 2009 at 05:31 | #24

    See Pascal Meunier's post about this:

    http://www.cerias.purdue.edu/site/blog/post/virtu

    All parts of the same story I guess we've been realizing for a while now 🙂

  25. September 27th, 2009 at 12:11 | #25

    Is virtualization a solution to a problem or part of the problem?

    In my view and experience, virtualization is part of the problem as well as part of the solution. While automation is the key to fulfilling end-to-end service delivery, virtualization is a necessary technology. However, the current architectural style of service composition, delivery, and management is mired in problems, workarounds, and band-aids, which makes SLA-driven end-to-end service delivery just a promise, not a fulfillment. We should stop dishing out nodes to development. We should stop pushing ACLs into switches. We should stop accessing OS primitives from applications. We should stop writing communication patterns into applications. A well-defined abstraction and framework on top of virtualization is essential to make this happen, and we can't ignore change, configuration, and security management. Simply put: push-button delivery of services into the Cloud, securely, reliably, and rapidly. JEOS is the first step in that direction.

    Let me share my view on why virtualization is part of the problem first, and then explain why it is also important for end-to-end service delivery.

    Why is it part of the problem?

    "Geometric complexity" of systems is a (if not the) major contributor to the costs and stability issues we face in our production environments today. In this context, complexity is introduced by the heterogeneity and variations of “OS” needs per application and underlying components (like databases, network, and security etc). These unmanageable or incomprehensible numbers of variations of the Operating Environment makes it hard to understand and optimize our compute infrastructure. We continue to invest our scarce resources to keep this junk alive and fresh all the time. More importantly, 70% of service outages today is caused by configuration or patching errors.

    Christopher Hoff (@beaker) puts it very well,

    “there’s a bloated, parasitic resource-gobbling cancer inside every VM”.

    I was hopeful and optimistic that this would change the way applications are designed and delivered. Rich application frameworks like J2EE, Spring, Ruby, etc. evolved, but the operating environment evolved into one big, monolithic, generalized OS, making it impossible to track what is needed and what is not. Adding to this brew, a mind-boggling number of open-source libraries and tools crept into the OS. Though virtualization provided an opportunity to help us correct these sins, in the disguise of virtualization we started to commit more sins. Sadly, instead of wiping out the cancer bits in the operating environment, all the junk got packaged into VMs.

    Christopher Hoff (@beaker) raised a very thought-provoking and stimulating question:

    “If we didn’t have resource-inefficient operating systems, handicapped applications that were incestuously hooked to them, and tons of legacy networking stuff to deal with that unholy affinity, imagine the fun we could have. Imagine how agile and flexible we could become.”

    This is very true. We have too much baggage and junk inside our operating environment. That has to change. It is not a question of VMware, Xen, Parallels or Linux, OpenSolaris or FreeBSD. We need a paradigm shift in the way we architect and deliver “services”.

    Sam Johnston (@samj) pointed out:

    “I agree completely that the OS is like a cancer that sucks energy (e.g. resources, cycles), needs constant treatment (e.g. patches, updates, upgrades) and poses significant risk of death (e.g. catastrophic failure) to any application it hosts.” Yes, Sam is correct in his characterization, or assertion, of a “malignant OS”.

    Now let me turn to why virtualization is important.

    @JSchroedl @AndiMann @sureddy Sounds like we're all in virtual agreement: Not just virtual servers, or even virtual systems, but "Services" end-to-end.

    End to End Service Delivery: My sense of virtualization is that it provides an abstraction to absorb all low-level variations, exposing a much simpler, homogeneous environment. While this is not sufficient to deliver the automation needed for end-to-end service delivery, it is a necessary technology. Applications/services won't be exposed to the variations in our operating environment; instead, they will be exposed to a service runtime platform (call it a “container” for lack of a better word) with uniform behavioral characteristics and interfaces. (Please note that the “container” is not a VM; it is a much higher-level abstraction that orchestrates hypervisors and operating environments, isolating all the intricacies of virtualization and operations management.) We won't need to qualify an innumerable combination of hardware, OSes, and software stacks. Instead, the container layer will be the point of qualification on both sides: each new variation of hardware will be qualified against a single container layer, and all software will be qualified against that same container layer (quite literally providing a fast lane-change mechanism across development, test, staging and production: continuous integration and continuous deployment). This is a really big deal. It helps us innovate and roll out new services much faster than before. Virtualization plays an important role in fulfilling end-to-end service delivery.
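
    To put rough numbers on the qualification win (the counts in this little Python sketch are invented):

        # Qualify N hardware variants and M software stacks against one
        # container interface instead of testing every pairing.
        hardware_variants = 12
        software_stacks = 40
        print("pairwise qualifications:", hardware_variants * software_stacks)  # 480
        print("via a container layer:  ", hardware_variants + software_stacks)  # 52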

    Christopher Hoff (@beaker) pointed out:

    “VMs have allowed us to take the first steps toward defining, compartmentalizing, and isolating some pretty nasty problems anchored on the sins of our fathers, but they don’t do a damned thing to fix them. VMs have certainly allowed us to (literally) think outside the box about how we characterize workloads and have enabled us to begin talking about how we make them somewhat mobile, portable, interoperable, easy to describe and inventory, and in some cases more secure. Cool.”

    Configurations vs. Customizations: Virtualization also absorbs variations in the configurations of physical machines. With virtualization, applications can be written around their own, long-lasting "sweet spots" of service configurations that are synthesized and maintained at the container.

    Homogeneity: The homogeneity afforded by virtualization extends to the entire software-development lifecycle. By using a uniform, virtualized serving infrastructure throughout the entire process, from development, through QA, all the way to deployment, we can significantly accelerate innovation, eliminate complexities, and reduce or eliminate the incidents that inevitably arise when the dev and QA environments differ from production.

    Mobility: Software mobility, the ability to easily move software from one machine to another, will greatly relax our SLAs for break-fix (because the software from a broken node can automatically be brought up on a working node), and that in turn reduces the need to physically move machines (because we can move the software instead of moving the machines).

    Security Forensics: When an app host is to be decommissioned, virtualization presents the opportunity to archive the state of the host for security forensics, and to securely wipe the data from the decommissioned host using a simple, secure file-wipe rather than a specialized, hard-to-verify bootstrap process. In sum, VMMs provide a uniform, reliable, and performant API from which we can drive automation of the entire host life cycle.

    Horizontal Scalability: Virtualization drives another very interesting and compelling architectural paradigm shift. In the world of SOA and global serving with unpredictable workloads, we are better off running a service tier (my view of a tier is a load-balanced cluster of elastic nodes) across a larger number of smaller nodes, rather than a smaller number of larger nodes. A large number of smaller nodes provides cost advantages as well as horizontal scalability. In addition, with a larger number of smaller nodes, when a node goes out the remaining nodes can more easily absorb the resulting spike in workload, and nodes can be added or removed in response to workload, as the sketch below illustrates.
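
    A back-of-envelope Python sketch of the failure-absorption argument (utilization and node counts are invented):

        # When one node in an evenly loaded, load-balanced tier dies, the
        # survivors absorb its share: utilization * n / (n - 1).
        def load_after_failure(n, utilization):
            return utilization * n / (n - 1)

        for n in (4, 40):
            print(f"{n:>2} nodes at 70%: survivors at {load_after_failure(n, 0.70):.1%}")
        #  4 nodes at 70%: survivors at 93.3%  (close to saturation)
        # 40 nodes at 70%: survivors at 71.8%  (barely noticeable)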

    Eliminate Complex Parallelism: My experience with multi-processing (SMP) systems has shown that effectively scaling software beyond a few cores requires specialized design and programming skills to avoid contention and other bottlenecks to parallelism. Throwing more cores at our software does not improve performance. It is hard to build these specialized skills to develop well-tuned SMP software, and this is indeed becoming a great inhibitor to innovation in building scalable services. By slicing large physical servers into smaller virtual machines we can deliver more value from our investment.

    Cloud and Virtualization

    @JSchroedl: PRT @AndiMann: HV = no more than hammers PRT @sureddy: Virt servers don't matter.Cloud is a promise "Service" is what counts

    Cloud is a promise and Service is the fulfillment. The goal of the cloud is to introduce an orders-of-magnitude increase in the amount of automation in IT environment, and to leverage that automation to introduce an orders-of-magnitude reduction in our time-to-respond. If a machine goes down (I should stop referring to machines any more – instead I should start emphasizing SLAs), automatically move its workload to a replacement—within seconds. If load on a service spikes or SLAs deviate from the expected mean, auto-magically increase the capacity of that service—again, within seconds.
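
    In toy form, the kind of control loop I mean looks like this (a Python sketch; the thresholds, signals and scaling policy are invented for illustration):

        # Watch a service-level signal and act within seconds, not tickets.
        def reconcile(latency_ms, nodes, slo_ms=200):
            if latency_ms > slo_ms:              # SLA deviating: add capacity
                return nodes + 1
            if latency_ms < slo_ms * 0.5 and nodes > 1:
                return nodes - 1                 # well under SLO: shed a node
            return nodes

        nodes = 10
        for observed in (180, 260, 320, 150, 90):  # made-up latency samples
            nodes = reconcile(observed, nodes)
            print(f"latency {observed}ms -> run {nodes} nodes")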

    Hypervisors (virtualization) are as necessary as hammers, but not sufficient. What is needed is "End-to-End Service Delivery." There is no doubt in my mind that IT is strategic to the business, and if properly aligned with business goals, IT can indeed create huge value. Automation and end-to-end service delivery are key drivers for transforming current IT into more agile and responsive IT.

    Physical machines do not provide this level of automation. Neither do the bloated VMs containing cancerous OS images. What we need is a clean separation of the base operating system (uniform across the cloud), platform-specific components/bundles, and then application components/configurations. While it is impossible to rip and replace existing IT infrastructure, this layered approach would help us gradually move toward a more agile service-delivery environment.

  26. David O'Berry
    September 27th, 2009 at 16:11 | #26

    Surendra wrote:

    "Hypervisors (virtualization) are as necessary as hammers but not sufficient. What is needed is “End-to-End Service delivery. There is no doubt in my mind that IT is strategic to the business and if properly aligned with business goals, IT can indeed create huge value. Automation and End-to-End service delivery are key drivers for transforming current IT to more agile and responsive IT.

    Physical machines do not provide this level of automation. Neither the bloated VMs containing the cancerous OS images. What we need a clean separation of Base Operating system(uniform across cloud), Platform specific components/bundles, and then application components/configurations. While it is impossible to rip and replace existing IT infrastructure, this layered approach would help us to gradually move toward more agile service delivery environment.

    The entire comment was really solid but these two last paragraphs need to be emblazoned on tablets and passed out to people all over the place.

    One of the challenges, again, is that a number of folks really have a hard time not seeing certain technologies or combos of technologies as the "answer". We have been trained to seek silver bullets for problems when in fact that is rarely the case.

    Anyway, thanks for the comments from everyone already in this thread, and to Hoff for the initial post. I appreciate things that really help me think better.

    -David

  27. September 27th, 2009 at 22:02 | #27

    @Surendra Reddy

    (copied from initial response at Surendra's blog in response to his post)

    I see, in this, a tension between rock-solid service definition, where the service is so well defined you could almost write an ASIC for it and just deliver, deliver, deliver… but there is a fly in the ointment, and it's framework, tools, dependency and library drift–some of it bloat, but some of it required for application advancement. For instance, Twitter originally wanted to be on Solaris, but they had to move because of tools/library availability elsewhere.

    So long as developer productivity is tied into the changing world of

    “Adding to this brew, a mind-boggling number of open-source libraries and tools crept into the OS.”

    I see:

    “Sadly, instead of wiping out the cancer bits in the operating environment, all the junk got packaged into VMs.”

    as being persistent? Maybe JEOS, and brilliant just-in-time ones such as Randy B suggested, can clean up a lot of the bloat, but I do not believe JEOS can clean up all dependencies/drift/etc.?

    So when you say:

    “We need a paradigm shift in the way we architect and deliver ‘services’.”

    And-

    “The container layer will be the point of qualification on both sides: each new variation of hardware will be qualified against a single container layer, and all software will be qualified against that same container layer (quite literally providing a fast lane-change mechanism across development, test, staging and production: continuous integration and continuous deployment).”

    My question is, how do you adapt to change? If a hot new must-have developer tool/library/Swifter comes out, you can either tell your developers "sorry, not in our package plan" (Google's method) or play catch-up once they build it in. Even a skinny container will have change-management complexity.

    *

    Also, are there chances to pick off certain parts of the service delivery and hard-code them? @monadic from RabbitMQ tells me you could actually make hardware/an ASIC just for delivering his messaging software. Does anything like that start to come into play as you get to hyper-scale on an application?

  28. Andrew Yeomans
    September 28th, 2009 at 05:05 | #28

    I'll claim that you are both right and wrong here and on several levels 😉

    First, my big claim is that virtualisation has been popular because we (the industry) messed up application deployment a really long time ago. And it's a near miss – we managed to get multi-user applications to co-exist pretty well, sharing the CPU and hardware, so it was possible to run hundreds of applications on a single CPU and single operating system.

    But we messed up on the manageability. We let, nay encouraged, applications to install themselves all over the place. In the early Unix systems, all parts of the file system (/usr, /etc, /, …) could hold components of your application. When Sun introduced its diskless workstations around 1983, it succeeded in cleaning up some of that mess by forcing /usr to be read-only, so a single copy of program code could be distributed over the network to all the diskless workstations. But there were still the read-write parts of the applications, not all of which fitted easily into /var or /etc or /home.

    Microsoft had a brief chance to sort out where applications could install themselves, and could also have had all read-write data stored in a common fashion in the registry. But it seemed to abdicate responsibility for getting application developers to save their files in well-known places in standard ways, so the developers made maximum use of that, putting files all over the place and forgetting (or never knowing) the lessons from Sun. So Microsoft missed out on a great commercial opportunity to control what applications could be installed, and to enforce the benefits of a single method with a standard look-and-feel. And maybe even include the Linux-style ability (rpm/apt-get) to update all applications through a single command, not just their own. (And even updating their own apps was hard, with around 8 different installer systems used by Microsoft until the more recent rationalisation.)

    Apple made a brave attempt to have applications install themselves by drag-and-drop of a single installer bundle which auto-magically contained all the component files. Still, not all apps follow the same way, and there's still that backward compatibility with Unix-style file systems.

    Whereas what we really wanted was what's shown in your picture. Several nice little applications, each in their own self-contained box, sitting neatly on top of the operating system. Which could be added or removed quickly as required.

    Whereas what we got was the application, sitting in a fuzzy box, embedding bits of itself all over the place, and maybe trampling on other applications or (even worse) the OS itself. So we don't have a neat way of just adding or removing an application. Or patching or swapping the operating system. Lots of work is going into managing VM images, to make it simpler to add or remove or patch apps and the OS without breaking the rest. And it all takes execution time, having to knit together all those apps and data files into a working configuration before any real work can start.

    So we moved to the bare metal hypervisors as a way of having a standard interface to work to. The i386 instruction set was pretty solid (especially if you don't use those go-faster stripe instructions on later models). So you can create a VM image that will work anywhere. Anywhere, that is, that allows you to squirt 4 GB of VM image, when you really just wanted 30 MB of application code. But hey, bandwidth is cheap nowadays ?!? And isn't it a great advantage to be able to run Windows Server, Red Hat Enterprise, and Solaris all on the same box (until one of those teams reboots the entire server, losing the work of the other teams.)

    And that's why we have a lot of bloat, when we could have had multiple apps all running independently. But what about chroot jails, do I hear? They allowed separation of applications. True, but they were not easily managed in the default configuration, and took lots of skill and attention to get working. If they had been, perhaps we would be shipping round small chroot app packages today, rather than bloated VM images.

    Maybe we have a chance to re-do the engineering. Union filesystems can be used to rationalise the mess of apps embedding themselves all over the shop. But the vested interests of suppliers of CPU cycles, memory and disk all work against lean-and-green optimised engineering solutions.
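
    To make that concrete, a toy Python sketch of union-mount semantics (the layers and paths are hypothetical): lookups try the topmost layer first, and removing an app means dropping its layer, nothing more.

        # A read-only base OS layer with a thin per-application layer on top.
        base_os = {"/bin/sh": "shell", "/lib/libc.so": "libc"}
        app_layer = {"/opt/app/server": "my app", "/etc/app.conf": "my config"}

        def union_read(path, layers):
            for layer in layers:  # topmost layer wins
                if path in layer:
                    return layer[path]
            raise FileNotFoundError(path)

        stack = [app_layer, base_os]
        print(union_read("/etc/app.conf", stack))  # served from the app layer
        print(union_read("/bin/sh", stack))        # falls through to the base OS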

    My second claim is that your brick-wall picture is slightly unfair. It makes the OS look bigger than it really is. In reality the OS kernel code is quite small, much smaller than many applications. The kernel's data buffers are also quite small; much less than those of my running Firefox, for example. The virtualised hardware device drivers can also be pretty small if the real work is done at the lowest level, and I'd expect to see a trend to hardware devices that only ever exist virtually – why try to emulate hardware quirks when you could implement a clean device? It's not as if you can take your VM Windows image and run it on the bare metal; it's tied into those hypervisor-emulated virtual devices.

    But yes, you are right, there's a lot of baggage that comes with the OS, those startup and shutdown systems, interactive applications that are at best rarely used on a server, options to support all known combinations of options, crapware, etc, that bloat the image to a few GB in size. So JEOS makes a lot of sense, and I too would expect to see more of it.

    JEOS is good where we have "cloud server" applications, that only need resources of CPU/memory and network connections. If they don't talk to special hardware, they don't need the drivers. If all they do is move and manipulate data between the network and memory (whether RAM or disk, not that there's much difference now SSDs are gaining popularity), there's not a lot of operating system services they need. Certainly it helps for the app developers to not have to code up memory management, process and thread handling, file system handling, and network protocols. (Hey, we could even have Secure-JEOS which removes those insecure network protocols too!). There's potentially a host of shared libraries too, again to save reinventing the wheel, but I'll count those as part of the application-space running beside all those little app-boxes on the picture.

    "Cloud client" applications like virtual desktops don't fit quite so neatly, as desktops will need to pull in a lot more individual applications, hardware dependencies, libraries, customisation, so that bloat will be harder to remove, for all except the dedicated browser client.

    Maybe further development will blend JEOS and the hypervisor into one. Microsoft could pull this off for their own products, if they wanted to. I'm less sure of the incentives for other firms to manage a blended OS+hypervisor while needing to keep it broadly in step with mainstream code, since they can just take the mainstream code and remove large chunks to get traditional JEOS, a simpler process that keeps them on the right side of the copyright laws.

    Or maybe we reclaim the high ground by going for a completely different API interface. The Java, .NET, Ruby, Python and Smalltalk APIs could all be implemented by a different VM architecture. Let the (single) operating system manage such application layers and provide the multi-tasking facilities. And let it run on whatever processor family, x86, ARM, MIPS, SPARC or that new optimised .NET common-runtime CPU. Bringing back the 30-year-old excitement of optimising the CPU architecture for what was being run.

    Interesting times…

  29. September 28th, 2009 at 09:13 | #29

    Constructing a just-enough OS for each type of application stack has a lot of merit. For instance, Java is already designed to abstract the lowest common denominator of OS features. Preparing a custom distro designed to run only Java apps could bring significant savings.

    I experimented with Ubuntu Server Edition JeOS for a while, running a Java web server, and it worked great; however, it was still very heavy (300MB on disk). If you know your virtualization environment ahead of time you can trim out all of the drivers except the ones for the given platform's virtual devices. That alone saves about 70MB. If you start doing more extreme cutting (like swapping the core utilities for busybox, replacing libc with something lighter and cutting out legacy services) you can get down to sub-50MB. But that still seems heavy, and something like VMware Tools bloats it back up (over 100MB extra). It would be great if we could go further and create something as thin as the virtualization layer under it and the application layer over it, rather than this 'FAT' column in the middle. Something in the 20MB range would approach sanity and auditability.
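
    The driver-trimming step could be scripted roughly like this (a Python sketch; the keep-list is illustrative rather than a tested recipe, so point it at a mounted throwaway image first):

        # Keep only the paravirtual drivers for the known platform and count
        # what deleting the rest would free. Dry run by default.
        import os

        KEEP = ("virtio", "vmxnet", "vmw_", "pvscsi")  # hypothetical keep-list

        def trim_modules(root, dry_run=True):
            freed = 0
            for dirpath, _, files in os.walk(os.path.join(root, "lib/modules")):
                for name in files:
                    if name.endswith(".ko") and not any(k in name for k in KEEP):
                        path = os.path.join(dirpath, name)
                        freed += os.path.getsize(path)
                        if not dry_run:
                            os.remove(path)
            print(f"would free ~{freed / 2**20:.0f} MB of drivers")

        trim_modules("/mnt/jeos-image")  # mounted guest image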

    In any case, it does seem like a great way to reduce the amount of maintenance and attack surface (a little chemo for the cancer). It would take a little air out of the squeezed balloon, but the application stack running on top will still go POP at some point.

    One concern I do have with this approach is that it may reduce or eliminate the host-based controls you can run (especially with custom kernels). As you have said, until we have support for alternatives like virtual appliances in public cloud services, host-based controls do play an important role.

  30. Gary Mazz
    September 28th, 2009 at 10:57 | #30

    Crappy resource management is still crappy resource management whether you are talking about programming, IT hardware, inefficient programming languages (interpreted) , data storage or your warehouse inventory.

    We all got into the habit of living "fat" when the economies could carry IT inefficiencies and tolerate purchasing hardware assets without a "real" way of determining the efficiency of the capital expended on the resources. Because customers couldn't account for their purchases and didn't push back on OS vendors and application software suppliers, we ended up with bloated operating systems and programming languages that execute 4-8 times slower than native binaries. No one cared when the cost of overbuilding IT (for peak demand) and overbuying IT was hidden by annual purchasing cycles and 3-year asset depreciation.

    Now that we have to sign a check every week, we are cringing at the idea of paying for assets we don't need, and calling the bloat we were all happy to accept a cancer.

    3 years ago, when you purchased a 4-core server for $25k and paid an additional 15% annually for 4-hour service, that server cost you $36k, or $1000/mo. So we were happy paying $250/mo for a single CPU. We'll ignore the additional 25% for the cost of life support for the server.

    But here you are crying about the costs of storage for the operating system, yet you are still rolling out Java like it's the best thing since sliced bread.

    You are paying for work produced by a CPU. The greater the inefficiency, the more it costs. IMO, the greatest fraud perpetrated on the computing (server) community was Java. The idea was great: convince programmers to use a programming language that initially had less capability than C++ on the hollow promise of write once, run anywhere. As it turned out, it was execute at 1/4 the speed, so we needed to purchase more CPUs and servers.

    If the same amount of time and money had been expended on C++ as on Java, we would have seen greater IT application efficiencies, and we might not have been talking about cloud computing for another 10 years.

    The second issue here, easily recognized by anyone who has been in the server business for more than 25 years, is that servers are now being treated like workstations. Whoever heard of putting the GNOME desktop on an enterprise-class database or middleware server? Servers used to be treated more like embedded systems: whatever was required to make the server and its single application run was on the server; everything else was removed. Then the I/O and memory were tuned to maximize application efficiency and reliability.

    The cloud will continue to bloat software with wild new algorithms and software packages that promise to improve cost efficiencies.

    Sometimes less IS more.

  31. September 29th, 2009 at 17:44 | #31

    Vague pondering on the subject:

    It's hard to picture computing without the OS behemoth. Almost from day one of generalised computing it's been there in some form or another to act as the interface between the hardware and software, to stop programmers from having to worry about how to talk to different bits of hardware and just provide them with a standardised interface. It's done a good job of that too.

    Every now and then hardware has come along that's thrown all that out of the window, e.g. Graphics accelerators, and sound cards with their own proprietary interface and drivers that did their own fancy stuff, and you'd go through a period of time where developers had to write for target hardware. Eventually someone would come along and re-invent the wheel by providing a common interface to access hardware (e.g. DirectX, OpenGL) and developers would breathe a sigh of relief and get back to what they feel they do best. It is still another layer of bloat though.

    I may be talking out my arse here, but about the only way I can think of to drastically reduce the bloat is for something fairly shocking, but also stifling, to happen at the hardware level. Set up standards for communicating with the devices directly, and at best have an "OS on a chip"?

    It'd be great in some regards if every piece of hardware responded consistently to exactly the same instructions / procedures. The negative aspect is that it would stifle innovation as companies would have to get the standard approved and implemented before any new specialised hardware could be used.

    @Tadd Axon

    I see mainframes as essentially what we're getting with services like Amazon's EC2 and especially SliceHost. Mainframes were always traditionally allocated by CPU time slices; certain departments would get more priority over timeslices than others. When you buy your cloud server you're effectively getting xx% of the CPU time.

    Looking at it from that angle, and that of this great blog, consider how inefficient this cloud concept is compared to mainframes. In the old mainframe days each department never had to load its own OS as part of its CPU timeshare, just the program.

    I'm not a huge fan of chroot jails myself. There seems to be a complete misconception about how secure they really are. The sad truth always was (I've not investigated OpenVZ or Solaris Zones) that no matter how good your jail, people can still break out in unexpected ways.

    Call it Sysadmin paranoia, but I operate on the principle that if someone you never gave access to gets to the command line of your system, no matter where or how, it's game over. No matter how well you keep your server patched, sod's law has it there will be an unpatched or hitherto unknown vulnerability somewhere that will enable someone to gain access rights you don't want them to have. It's one of the main reasons why Sysadmins preach "every service on its own system." As much as that's resource-inefficient, it's the only secure way (currently). VMs are great in that they help reduce the impact of that inefficiency, but they still have a long way to go themselves!

  32. LeitM
    September 29th, 2009 at 18:06 | #32

    There is a huge installed base of applications that are not going away, will not be rewritten and are often left untouched. This installed base relies on an OS. While you can argue that we don't need two supervisors inside a physical machine (OS inside the VM and the hypervisor), and that the OS is bloated, the OS is the entity which helps keep the installed base of software up and running.

    Would this change in the future? Maybe – but it is going to take time (while more applications are being built to run on OSes :-))

  33. September 29th, 2009 at 20:30 | #33

    I like and endorse this discussion very much. I'd like to add two comments: one about the initially raised issue of the OS (as we know it today) being overloaded, and a second to address services.

    1. The overloaded OS.

    In my eyes one of the biggest misconceptions from client/server computing was (and is) the idea that a (personal) computer can not only be used, but even installed and configured, by everyone. This is just not true. Compare an end-user computer with the appliances in your kitchen: imagine if the specialized kitchen-supply store sold only one (or, let's say, three) base architectures, and with some "configuration" you could make one into a fridge, a dishwasher or an oven, no problem. Technology-wise it would be possible. In reality we don't see it, because it is not practical and would add too much overload to a system that just has to fulfill its core needs.

    This misconception, that everyone can manage and configure a personal computer, was carried over to servers during the early '90s as well. You do not need to be a specialist anymore; anyone can configure a Windows server, it's just Windows… The second paradigm was adopted as well: one size fits all, and that's the best (and cheapest) way to do it. Looking into my kitchen, I have my doubts…

    I really enjoy the advantage of a GUI for me as an end-user or administrator. But for a server there is no advantage, so why does every server today have to carry its GUI interface?

    All I'd need is an orchestration layer that helps me, the poor manager of the system, do my job; it does not need to be part of the server itself.

    2. The advantage of Services

    The advantage of “services”, single, isolated units that are specialized in doing a minimum of tasks, relying on other services they consume, receiving inputs from and providing outputs to other services (or, as a HID service, to the user), has been known since the beginning of IT. Even far before IT, Henry Ford implemented this concept for the first time to build cars.

    In IT we have had several cycles of building abstraction layers; the introduction of the OS was one of them, procedural programming languages another, then 5GL, J2EE, SOA services, Cloud. All the time the same concept, sometimes just at a different layer. Looking back on the last 40 years of IT, the question is: why did this concept not mature, instead of being seen as something new and mind-shifting every few years?

    I see the reason in the tension between innovation and necessary individualism on one side, and the need for governance and cooperation on the other to achieve efficient outcomes from services. This will be the core challenge for Cloud, as well as for hypervisors becoming light-weight OSes facilitating “just enough OS” to serve the application on top. Unfortunately I don’t see an approach, nor even a discussion, that addresses this.

  34. September 30th, 2009 at 03:17 | #34

    First, you could publish a book with these comments. Dang.

    I keep thinking about how much more VMware people could buy if they didn't have to pay for Windows.

  35. Mark
    October 1st, 2009 at 16:07 | #35

    @Tadd Axon

    No doubt the mainframe VM OS model is better in that it is fully virtualized yet tightly integrated between host and guest, primarily because host and guest are exactly the same platform and OS.

    As for BSD jails making a comeback: Parallels' Virtuozzo and Sun's Solaris Containers have both been around for years, and IBM has Workload Partitions in AIX 6.1.

    What is really needed is for VMware to build a Linux JEOS with tight integration and two-way reporting with ESX. The goal of the integration would be to prevent resource-scheduling conflicts. The JEOS must provide robust process management (threading, process restart, etc.) to the middleware. The idea that you restart a VM to solve a problem is as stupid as the "crash and reboot" RISC/UNIX SMP systems of the 1990s. VMs are going to get big (in vCPU count) and run big apps (with dozens of running processes and hundreds of running threads).
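
    To make the "restart the process, not the VM" point concrete, here is a minimal sketch of the kind of in-guest supervisor loop such a JEOS could offer the middleware. Treat it as illustrative only: the service binary path is hypothetical, and the two-way reporting to ESX described above is out of scope.

        #!/usr/bin/env python3
        """Sketch: restart a failed service in place instead of rebooting the VM."""
        import subprocess
        import time

        COMMAND = ["/usr/local/bin/my-middleware"]  # hypothetical service binary
        MAX_BACKOFF = 60  # seconds

        def supervise(command):
            backoff = 1
            while True:
                started = time.monotonic()
                proc = subprocess.Popen(command)
                code = proc.wait()                      # block until the service dies
                uptime = time.monotonic() - started
                # Reset the backoff if the service ran a while before dying;
                # otherwise back off exponentially to avoid a crash loop.
                backoff = 1 if uptime > 30 else min(backoff * 2, MAX_BACKOFF)
                print(f"service exited with {code}; restarting in {backoff}s")
                time.sleep(backoff)

        if __name__ == "__main__":
            supervise(COMMAND)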

  36. October 2nd, 2009 at 02:08 | #36

    @needcaffeine

    Well then, maybe the future is better designed/controlled virtual applications (wholly self-contained), so you're on one OS with multiple virtual apps. Theoretically you could have a Windows server with Exchange, SQL, IIS, and SharePoint all running; but in reality they don't play nicely with each other, and you can't control their resources the way you can with a VM.
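
    On the Linux side, the OS-level partitioning Mark mentions above (Virtuozzo, Solaris Containers) addresses exactly that resource-control gap. As a rough sketch of the idea, here is what capping a single app looks like with Linux control groups, assuming a cgroup-v2 kernel with the hierarchy mounted at /sys/fs/cgroup and root privileges; the group name, binary, and limits are all made up:

        #!/usr/bin/env python3
        """Sketch: VM-style resource caps for one app, without a VM."""
        import os
        import subprocess

        CGROUP = "/sys/fs/cgroup/demo-app"  # hypothetical group name
        os.makedirs(CGROUP, exist_ok=True)

        # Cap memory at 512 MiB and CPU at half a core (50ms of every 100ms).
        with open(os.path.join(CGROUP, "memory.max"), "w") as f:
            f.write(str(512 * 1024 * 1024))
        with open(os.path.join(CGROUP, "cpu.max"), "w") as f:
            f.write("50000 100000")

        # Start the app, then move it into the group so the caps apply.
        proc = subprocess.Popen(["/usr/local/bin/demo-app"])  # hypothetical binary
        with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
            f.write(str(proc.pid))
        proc.wait()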

  37. John
    October 27th, 2009 at 10:10 | #37

    This is a great discussion, and I've been grousing for years about OS/application bloat, the failure of UNIX and Java to truly be write once/run anywhere, and just plain old wasteful coding 'because the cycles and memory are available, and we get paid by the LineOfCode'.

    VMs shouldn't go away, though; they have their value, IMHO (forensics, snapshot recovery, test environments, maximizing use of idle cycles).

    Whatever happened to real modularity? I'm not a programmer or kernel architect, but we've all seen clean APIs that are optimized for efficiency and performance.

    How big is the OS on an AT&T 5ESS switch? Not very, if I recall. That's why I've begun tinkering with the BSDs again; they're lean.

    The notion of shared libraries is a double-edged sword: it conserves space but creates too many dependencies (gotta have 'this' if you want to install 'that'). That's where you folks are dead-on about clean abstraction and interfaces to hardware. Write the libraries lean and clean, make 'em portable, and have a clean API to the OS (which shouldn't go away; JEOS is just what's needed).

    Oh yes, and consistent, accessible logging (à la MITRE CEE, another hopeful…).

    I guess I'm just an idealist who continues to believe that someone is going to get fed up and solve the problem. I'm going to start keeping tabs on the notion of JEOS; its time has come, and I sense the winds of change starting to blow (again, idealism creeping in…).

  38. Tross
    January 4th, 2010 at 08:37 | #38

    @beaker

    freeDOS makes a comeback :-). Seriously though, even Linus has been quoted saying the Linux kernel is bloated. Consider the scenario painted above: if each VM only has to run a single "application" or, in fact, a single application component, then what does that component really need? We definitely need an abstraction layer for filesystems (about all DOS did :-)) and other I/O (e.g. TCP/IP), and we need a thread of control. DOS only had one thread, which is sufficient for many kinds of applications.

    Back in the day, I used to use TSRs (remember those?!) to run my ISR, doing only minimal processing during an interrupt: just enough to put stuff on a queue. A mainline thread would then serially pull from the queue and process it. Not far off from many server frameworks today :-). It didn't take much to even provide lightweight threads using the timer interrupt. I bet we could have a pretty lightweight "DOS", a true JeOS, running Apache in the first 64KB of RAM 🙂 in a given VM.
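
    For what it's worth, that ISR-plus-queue shape translates almost line for line into modern code. A toy rendition (POSIX-only, using the timer signal as a stand-in for the timer interrupt; the event payload is made up):

        #!/usr/bin/env python3
        """Sketch: 'ISR' enqueues, one mainline thread drains the queue."""
        import queue
        import signal
        import threading
        import time

        work = queue.Queue()

        def isr(signum, frame):
            # The "ISR": minimal processing, just put stuff on the queue.
            work.put(("tick", time.time()))

        def mainline():
            # Mainline thread: serially pull from the queue and process.
            while True:
                event, when = work.get()
                print(f"processing {event} queued at {when:.3f}")

        threading.Thread(target=mainline, daemon=True).start()
        signal.signal(signal.SIGALRM, isr)          # stand-in for the timer interrupt
        signal.setitimer(signal.ITIMER_REAL, 1, 1)  # fire once per second
        while True:
            signal.pause()                          # idle until the next "interrupt"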

    What does this buy me? IMHO, it's just one of many options that should be available to application developers. Some applications, DBMSs for example, may choose more fully functional OSes because they need them. Other applications may choose to take matters into their own hands that much more and build their own plumbing. In fact, that was one of the nicest things about DOS: you could easily go around it like it wasn't even there.

    p.s. – when you did go around DOS, chances are you had to rely on BIOS calls just to normalize access to the underlying hardware. Maybe that's what we really need: a better BIOS as the lowest possible layer on which to develop our applications. Optionally, we can take more to get more, like DOS giving us a filesystem. Perhaps an optimized Linux kernel for some modern filesystems? Next, maybe a more typical JeOS-style Linux?

    Options are fine, but what does this buy me? Simplicity. IMHO, that's the best way to reduce attack surfaces, reduce bugs, improve performance and availability, etc. Simpler == better.

    Let's see if we can kick off a new branch of freeDOS that can run all the popular open source languages and filesystems etc. 🙂

  39. April 16th, 2010 at 02:32 | #39

    The OS is there to interact with the HW so the apps don't have to. True, systems would run better if we didn't have an OS, but good luck getting all of these applications working with numerous hardware platforms, options, drivers, components, etc.

    Now, if we increase the size/scope of the hypervisor to get rid of Windows, we are just building a new OS that applications have to be written to. We haven't solved anything. I do agree that we need a minimum OS, one that installs only the components our applications/components need. That would let us get rid of all of the crap.

  40. john
    October 14th, 2011 at 01:54 | #40

    I'm guessing you are not familiar with the map-reduce pattern, HDFS, and the scale-out concept at all. This is the solution we are looking for; we just need to learn how to code for it, and change our concept of what a machine really is. There are no silver bullets, no way to automagically make our software efficient. Why? In four words: the Church–Turing thesis. We have a cap on computation, and it has been known for 60 years now. You can't expect to solve some of the known hard problems in time polynomial in their input. So yes, you'll need scaling machine(s); you'll need to learn to distribute computations and workloads, because there is no other solution. Just like the way mankind built the pyramids: cooperation.
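
    For readers who haven't met it, the map-reduce contract is small enough to sketch in a few lines. This toy word count runs in one process, so it only shows the shape; Hadoop's value is distributing each phase (and the shuffle between them) across many machines over HDFS. Function names and sample data are made up.

        #!/usr/bin/env python3
        """Sketch: the map / shuffle / reduce contract, single-process."""
        from collections import defaultdict

        def map_phase(document):
            # map: emit (key, value) pairs with no shared state
            for word in document.split():
                yield word.lower(), 1

        def reduce_phase(word, counts):
            # reduce: fold all values seen for one key
            return word, sum(counts)

        documents = ["the OS is bloated", "the hypervisor is not an OS"]

        # shuffle: group intermediate pairs by key (Hadoop does this across nodes)
        groups = defaultdict(list)
        for doc in documents:
            for word, count in map_phase(doc):
                groups[word].append(count)

        for word, counts in sorted(groups.items()):
            print(reduce_phase(word, counts))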
