
TL;DR But My Virtual Machine Liked Me On Facebook Anyway…

September 2nd, 2012

I usually don’t spend much time when I write a blog, but this was ridiculously difficult to write.

I’m neither a neuroscientist nor a computer scientist. I’ve dabbled in AI and self-organizing maps, but I can barely do fractions, so every sentence of this blog had me doubting whether to write it at all. It’s probably shit, but I enjoyed thinking about it.

The further I tried to simplify my thoughts, the less cogent they became and what spooled outward onto my screen resembled more porridge than prose.

That said, I often feel stymied while writing. When someone else has already crystallized a thought, adding commentary seems pandering, redundant, or potentially intellectually fraudulent; it feels like there’s no possible way that my thoughts spilling out could be original, credible, or meaningful.

This is especially the case when brilliant people have written brilliant things on the topic.

“On the shoulders of giants” and all that…

Skynet, The Matrix, The Singularity, The Borg…all of these examples popped into my head as I wrote, destroying my almost-sensical paragraphs with clumsy analogies that had me longing to reduce my commentary to nothing more than basic Twitter- and Facebook-like primitives: “+1” or “Like.” It was all just a big pile of fail.

The funny thing is, that’s actually where this story begins and why its genesis was so intriguing.

Alex Williams wrote an article titled “How Machines Will Use Social Networks To Gain Identity, Develop Relationships And Make Friends.”

He offered up a couple of interesting examples from some conceptual “demos” from last week’s VMworld.  I re-read the article and found that the topic was profound, relevant and timely.

At its core, Alex challenges us to reimagine how “machines” — really combinations of infrastructure and applications that process information — might (self) identify, communicate, interoperate, organize and function as part of a collective construct, using a codified language that mimics the channels and graphs we humans use today to define our relationships online.

The article wobbled a bit with the implication that machines might “feel,” but stripping relevant actions or qualitative measures such as “like” or “dislike” down to their core, it’s not hard to imagine how machines might evaluate or re-evaluate relationships, behavior and (re)actions based on established primitives such as “good,” “bad,” “available” or “malfunctioned.”

I know that’s how my wife generally thinks of me.
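To make that concrete, here’s a toy sketch — entirely hypothetical, not from any demo Alex described — of how a machine might score its relationships with peers using nothing more than those four primitives:

```python
# Toy sketch: a machine re-evaluating peer relationships from simple
# status primitives. All names and weights here are made up.
from collections import defaultdict

# How each observed primitive nudges our opinion of a peer.
PRIMITIVE_WEIGHTS = {
    "good": +1,
    "available": +1,
    "bad": -1,
    "malfunctioned": -2,
}

class Machine:
    def __init__(self, name):
        self.name = name
        self.affinity = defaultdict(int)  # peer name -> running score

    def observe(self, peer, primitive):
        """Fold one observed event into the relationship score."""
        self.affinity[peer] += PRIMITIVE_WEIGHTS[primitive]

    def likes(self, peer):
        """'Like' is just a threshold over accumulated observations."""
        return self.affinity[peer] > 0

m = Machine("web-01")
m.observe("db-01", "available")
m.observe("db-01", "good")
m.observe("db-02", "malfunctioned")
print(m.likes("db-01"), m.likes("db-02"))  # True False
```

No feelings required — just a running tally and a threshold, which is roughly what a “Like” button reduces to anyway.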

Frankly, it’s a simple concept, even for humans. Complex as our species is, even heady things like emotional responses are often modeled as a function of two fundamental neurotransmitters — chemical messengers — the biogenic amines serotonin and dopamine. The levels of these neurotransmitters are normally quite reasonably regulated but can be heightened or depressed by the presence of, and interaction with, other chemical compounds. These neurochemical interactions may yield behavioral or even systemic immune responses that manifest themselves in a variety of ways, from happiness to disease.

One might imagine that machines might likewise interact and form behavioral responses to, and thus relationships with, other groups of machines in either like-minded or opposing “clusters” using a distilled version of the very “activity streams” that humans feed into and out of using social media, defined by the dynamic, organic and chaotic social graph that ties them.

[I just noticed that my friend and former colleague Mat Matthews from Plexxi wrote a blog on "affinity" and described this as Socially Defined Networks. Brilliant.]

I’m sure that in some way, they already do. But again, I’m hung up on the fact that my Nest thermostat may actually be out to kill me…and tweet about it at an ecologically sound point in time when electricity costs are optimal.

The notion that machines will process these activity streams like humans do, and act on them, is really a natural extension of how today’s application architectures and infrastructure designs already use message buses and APIs to intercommunicate. It’s a bit of a re-hash of the autonomic, self-learning, HAL-9000-batshit-crazy compute concepts we’ve all heard of before.

On Twitter, reacting to what he sensed as “sensationalism,” Thomas Lukasik (@sparkenstein) summarized my assessment of this concept (thus rendering all these words even more useless) thusly:

“…my immediate response was that a “social network” is an ideal model 2 take advantage of N autonomous systems.”

My response: +1 (see what I did there? ;)

But what differentiates the human social graph from the non-kinetic “cyber” graph is the capacity, desire and operational modality that describe how, when and why events are processed (or not). That, and crazy ex-girlfriends, pictures of dinner and political commentary.

I further addressed Thomas’ complaint that we’d seen this before by positing that “how humans are changing the way we interact will ultimately define how the machines we design will, too.”

To wit, machines don’t necessarily have the complexity, variety, velocity and volume of unrelated stimuli and distractions that humans do. We have more senses, and we have fuzzy responses where machines have binary ones. They are simpler, more discrete “creatures,” and as their taskmasters we enjoy a highly leveraged, somewhat predictable and reasonably consistent way in which they process and respond to events.

Usually until something kinetic or previously undefined occurs. Then, the dependency on automation and the ability of the discrete and systemic elements to “learn,” adapt, interact and leverage previously unrelated relationships with other nodes become important. I wrote about that here: Unsafe At Any Speed: The Darkside Of Automation

What’s really relevant here, however, is that the “social graph” approach — the relationship between entities and the policies established to govern them — can help close that gap. Autonomous is cool. Being part of an “autonomous collective” is cooler. As evidence, I offer up that scene with the peasants in “Monty Python and the Holy Grail.”

In fact, if one were to look at computer networks, we’ve seen the evolution from centralized to distributed and now hybrid models of how the messages and state between entities are communicated and controlled.

Now, take a deep breath because I’m about to add yet another bit of “sensationalism” that Thomas will probably choke on…

The notion of separating the control, data and management planes, which exist today in the form of protocols and communication architectures, is already bubbling to the surface in the highly hyped area of software-defined networking (SDN).

I’m going to leave the bulk of my SDN example for another post, but bear with me for just a minute.  (Actually, this is where the blog descends into really crappily thought out rambling.)

If we allow applications and infrastructure — both critical components of “the machine” — to communicate in an automated manner, to contextualize the notion that an event or message might indicate a need for a state change, a difference in service delivery, or even something such as locality, and to share that information with those who have a pre-defined relationship and a need to know, much goodness may occur.

Think: security.
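A toy sketch of that “need-to-know” sharing might look like this — a message gets delivered only to entities that have both a pre-defined relationship with the sender and a declared interest in that kind of event (all names hypothetical, purely illustrative):

```python
# Toy sketch of need-to-know event sharing over a relationship graph.
# An event reaches a receiver only if (a) the sender has a pre-defined
# relationship with it and (b) it declared interest in that event type.

class Graph:
    def __init__(self):
        self.edges = {}      # sender -> set of related receivers
        self.interests = {}  # receiver -> event types it cares about
        self.inbox = {}      # receiver -> delivered events

    def relate(self, sender, receiver, *event_types):
        self.edges.setdefault(sender, set()).add(receiver)
        self.interests.setdefault(receiver, set()).update(event_types)
        self.inbox.setdefault(receiver, [])

    def publish(self, sender, event_type, payload):
        # Fan out only along existing relationships, filtered by interest.
        for receiver in self.edges.get(sender, ()):
            if event_type in self.interests[receiver]:
                self.inbox[receiver].append((sender, event_type, payload))

g = Graph()
g.relate("app-tier", "firewall", "state_change")
g.relate("app-tier", "load-balancer", "locality")
g.publish("app-tier", "state_change", {"vm": "web-01", "state": "migrating"})
print(g.inbox["firewall"])       # got the event
print(g.inbox["load-balancer"])  # empty: no need-to-know for state_change
```

The firewall learns about the migrating VM; the load balancer, which only cares about locality, never sees it. That filter is the policy half of the social graph.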

This starts to bring back into focus the notion that, like a human immune system, such a collective could identify, localize and respond to an event, signalling to the rest of the collective the disposition of the event and what may be needed to deal with it.
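The immune analogy reduces to a loop you could sketch in a few lines — identify the anomaly, localize (quarantine) it, signal the collective. This is a hypothetical illustration of the analogy, not any real product:

```python
# Toy "immune response" loop: identify an anomalous node, quarantine
# it, then broadcast the disposition to the collective.

def immune_response(nodes, telemetry, broadcast):
    """nodes: live node names; telemetry: name -> status string."""
    for node in sorted(nodes):          # sorted() copies, so mutation is safe
        if telemetry.get(node) == "malfunctioned":  # identify
            nodes.discard(node)                     # localize: quarantine
            broadcast({                             # signal the collective
                "event": "quarantine",
                "node": node,
                "action_needed": "reprovision",
            })

messages = []
cluster = {"web-01", "web-02", "db-01"}
immune_response(cluster, {"web-02": "malfunctioned"}, messages.append)
print(cluster, messages)
```

The interesting part isn’t the loop; it’s the broadcast — the rest of the hive deciding what to do with the signal.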

The implications are profound because as the systems of “machines” become increasingly more networked, adaptive and complex, they become more like living organisms and these collective “hives” will behave less like binary constructs, and much more like fuzzy communities of animals such as ants or bees.

If we bring this back into the teeniest bit more relevant focus — let’s say virtualized data centers or even (gasp!) Cloud, I think that collision between “social” and “networking” really can take on a broader meaning, especially within the context of how systems intercommunicate and interact with one another.

As an example, the orchestration, provisioning, automation and policy engines we’re deploying today are primitive. The fact that applications and infrastructure are viewed as discrete entities and not as a system further complicates the problem space, because the paths, events, messages and actions of one layer are incomprehensible to the others. This is why we can’t have nice things, America.

What’s coming, however, are really interesting collisions of relevant technology combined with fantastic applications of defining and leveraging the ways in which these complex systems of machines can become much more useful, interactive, communicative and “social.”

I think that’s what Alex was getting at when he wrote:

…points to an inevitable future. The machines will have a voice. They will communicate in increasingly human-like ways. In the near term, the advancements in the use of social technologies will provide contextual ways to manage data centers. Activity streams serve as the language that people understand. They help translate the interactions between machines so problems can be diagnosed faster.

By treating machines as individuals we can better provide visualizations to orchestrate complex provisioning and management tasks. That is inevitable in a world which requires more simple ways to orchestrate the increasingly dynamic nature for the ways we humans live and work with the machines among us.

Johnny Five is Alive.

Like.

  1. September 2nd, 2012 at 00:37 | #1

    >> “Machines don’t necessarily have the complexity, variety, velocity and volume of unrelated stimuli and distractions that humans do.”

    Which is precisely why I think that a successful “cyber-social network” would require autonomic systems. A system would have to have enough intelligence to make it beneficial for another system to “follow it”, want to pay any attention to what it “posts”, and be in a position to understand and leverage that information to some advantage.

    TJL

    • beaker
      September 2nd, 2012 at 11:51 | #2

      Agreed!

      Along with dealing with velocity, volume and variety, there needs to be an element of “veracity” — or how to trust
      the feed. See also: spam, SNOPES and mindless retweets in Twitter today…

      /Hoff

  2. Dave Walker
    September 2nd, 2012 at 04:50 | #3

    I need to go and read Alex Williams’ article (and will, shortly), but as a first thought, this brings to mind one of the “trick axioms” I use when trying to come up with new ideas: “a computer is the world’s stupidest user”.

    To put it another way, “for any system involving people interacting with computers or other people, using computers as a communications medium, replace the people with more computers and see if the context fits or anything else interesting comes to mind”.

    (Aside: computers already have identities, and in ways which are surprisingly mappable to human ones; see a table I put together at http://ctovision.com/2012/04/identity-unbound/ ).

In terms of computers “feeling”, this gets me looking more in the direction of threat modelling and risk assessment, and how different services might be viewed by a computer based on where / what they are being offered by. Interestingly, as risk perception is something that we humans are notoriously bad at (see, e.g., lots of Schneier), computers may well be better at it, provided they have a sufficiently detailed world model.

    Sci-fi-style AI self-awareness wouldn’t be necessary to see benefits from this, but it would probably take a hell of a system to get decent results, even in a fairly constrained world with a small set of choices – external factors would always bleed in.

  3. beaker
    September 2nd, 2012 at 11:54 | #4

    @Dave Walker

    Cool, Dave.

    I generally think about phrasing this sort of thing as (id)entity – delineating between human/machine interfaces.

  4. September 2nd, 2012 at 14:36 | #5

    >> “Sci-fi-style AI self-awareness wouldn’t be necessary to see benefits from this..”

    I’m not sure that’s entirely true, @Dave. If “self-awareness” is missing, then where’s the “social” aspect — at least the “self-centered” one that we know something about?

    Members of a social network write about *themselves*, and subscribe to what others write about *themselves* — with lots of bathroom mirror *self* portraits thrown in.

    I believe that a sense of “self” is essential for “socializing” — and essentially what an autonomic system has.

    TJL

  5. September 4th, 2012 at 09:26 | #6

Cool. Will infrastructure optimization need SOA/mash-ups more than apps? Perhaps. A customer recently described the evolution of a similar process to me as “the awkward phase between human speed and machine speed transactions.” Add in a VM that can seek out what it “desires” and an infrastructure that can properly fend off and prioritize its own capabilities.

    I can see it now “Oh, hello Mr. Server, don’t mind me, I’m just a poor little VM in need of only a few cycles. Like me?” permission granted… “Nom nom nom I am sucking your cycles, I am the vampire VM”

    /Abner
