d o w n s t r e a m

what happens downstream?


    Mega Data Center: Seeing is Believing

    Over the past few months I had heard exclamations of amazement regarding a storied new data center in the Nevada desert called SuperNAP. I was a bit skeptical of the superlatives about scale and efficiency that embellished these stories. My skepticism turned to exuberance last week when I joined a group of architects from Sun for a tour. The goal of our tour of this Mega Data Center was to see firsthand the state of the art as implemented by Switch Communications, where Sun operates its cloud computing business.

    Switch, and its customers, which include several operating units within Sun, are beneficiaries of the collapse of Enron. The former utility giant had designs on trading network bandwidth using models similar to its energy trading systems. When Enron's flimsy financial structure gave way, its financial backers and the U.S. government stepped in to auction these assets. Switch CEO Rob Roy was the only one who showed up at the auction block. In an uncanny twist of fate, he managed to sidestep what could have been a formidable bidding war to control this communications hub, which is unparalleled in North America.

    Here are the vital stats that only begin to describe the phenomenal facility that Switch has managed to assemble:
    • 407K square feet of data center floor space
    • 100 MW of power provisioned from two separate power grids
    • Fully redundant power to every rack, backed by N+2 power distribution across the facility
    • Enough cooling and power density to run at 1,500 watts per square foot (that's 10x the industry average of 150 watts)
    • 27 national network carriers
    This describes the capacity of the SuperNAP, which is just one of eight facilities operated by Switch within a 6-7 mile radius in a no-fly zone south of Las Vegas.

    Sun to Reveal Cloud Plans Tomorrow

    Some details of Sun's Cloud Computing business will be revealed tomorrow (March 18) in New York at the CommunityOne East event.


     Additional Resources

    • The Register article on Sun's cloud in the SuperNAP
    • Another article in The Reg with pictures of the SuperNAP
    • CommunityOne site
    • Cloud Computing at sun.com

    March 17, 2009 in Cloud Computing, Sustainability

    How does Sun work with Drupal?

    In the lead-up to the event, the organizers of DrupalCon DC asked Sun and the other event sponsors a few questions about their relationship with Drupal. The first question was:

    How does Sun work with Drupal?

    I provided this answer:

    There are too many ways in which Sun works with Drupal to list them all here, but some of the highlights are:

    • In 2005 Sun donated a server to drupal.org when scaling and performance problems were hampering growth
    • Then in 2007 Sun donated another server to further propel the scale-up of d.o
    • Sun has used Drupal in building several important online communities:
      • TED Prize winner Architecture for Humanity's Open Architecture Network
      • Freshbrain - a technology exploration platform for youth
      • the Open Office extensions site
      • the Sun Learning eXchange
    • Plus many organizations run Drupal on Sun technology
    • Employees at Sun have been using Drupal internally for lots of things, including:
      • as a representative AMP stack environment and workload for lots of benchmarking and performance testing
      • to demonstrate certain technologies, both in Sun internal training and with Sun's customers, e.g., using the Webstack pre-built bundle of AMP and other open source packages for OpenSolaris, virtualization with Solaris zones, MySQL installation and tuning, and PHP code analysis with DTrace.
    • Sun has created a NetBeans plugin wizard for developing Drupal modules
    • Sun is currently investigating system appliances for data archive and CMS that would bundle Drupal
    • Of course, Sun is a large contributor to some of Drupal's most critical underlying open source technologies:
      • MySQL AB is a Sun company
      • many PostgreSQL core contributors are employed at Sun
      • many PHP core contributors are employed at Sun
      • Sun has contributed more FLOSS code than any other single institution*. (Though we think Solaris is technically a great choice for Drupal.)
    • and let's not forget that Sun VP and inventor of Java, James Gosling, defended Drupal inventor Dries Buytaert in his PhD examination at Ghent.

    Clearly, Sun is deeply connected to the Drupal community in many ways.

    All the Q&A from DrupalCon's sponsors will be posted here on the DrupalConDC site.


    * Entry corrected to say Sun has contributed more FLOSS code than any other single institution, not that Sun has contributed more code to the Linux kernel than any other single institution (although I think I did read that somewhere, it's not substantiated in this paper). Thanks to Matt for pointing that out - it's a major difference.


    February 24, 2009

    Rural Rwanda's Budding Healthcare System

    Ordinarily, my Inbox is full of anything but heartwarming emails, but last week I was gratified to receive a photo of four Sun Rays atop health worker desks in Butaro, Rwanda. The photo, sent by Erik Josephson of the Clinton Foundation's HIV/AIDS & Malaria initiative, shows the pilot setup for rural health clinics that former president Bill Clinton envisioned when he made his TED 2007 wish to ...
    "... build a sustainable, high quality rural health system for the whole country."
    - Bill Clinton, March 2007

    Sun provided these Sun Ray 2s and the supporting servers as part of its commitment to support the TED Prize that year. Actually seeing the Sun gear in situ makes all the planning, logistics, and weekly conference calls over the past 18 months suddenly worthwhile. The site in the photo is one of two pilot locations in which the infrastructure will be tested in live clinical situations over the next month. Pending the results of the pilot, this model infrastructure will be rolled out to an additional 70+ clinics and hospitals across rural Rwanda.

    The pilot phase of the project, currently administered by Partners in Health and The Clinton Foundation, is set to begin in the villages of Butaro and Kinoni on Monday. The systems infrastructure, comprising the Sun Ray 2, Sun Fire X2100 server, and Solaris OS, was selected by the project steering committee to serve up the healthcare worker desktop environment. The selection criteria reflected the goals of the project as well as the relatively austere conditions where the healthcare facilities are located:

    • Electricity is scarce and not terribly stable in rural Rwanda, so the Sun Ray 2, which consumes about 4 watts and is an entirely stateless device, is a good fit for the workstation. Attached to each Sun Ray is a low-power 15" display, which brings the total power consumed by each workstation to less than 25 watts. On the server end, the X2100 is the lowest-power server available from Sun. The total electricity demand for the primary IT infrastructure in a typical clinic - 7 workstations (Sun Ray and display), 1 server, 1 network switch - is less than 500 watts (a rough power budget is sketched after this list).
    • These facilities do not have extensive protection from the heat and dust that are common in Rwanda's rural villages, so reliable systems that will hold up to extremes are important. The Sun Fire X2100 is a reliable workhorse with good serviceability. Combine that with the Sun Ray's zero moving parts (except keyboard and mouse) and you have about the most reliable setup possible. Every clinic and hospital will inventory one spare Sun Ray, so if one does fail it's a simple swap to put that workstation back into service - no installation or configuration required. Just attach it to the network and you're back to treating patients. Spare X2100 servers will be inventoried in Kigali, so a server failure will require that a replacement be dispatched to the facility.
    • Rwanda is a fledgling economy. The ICT infrastructure upon which the healthcare system and other critical social services are built must be sustainable and low cost. Any dependence on proprietary commercial products would effectively impose a tax on growth and leave Rwanda's infrastructure at the mercy of foreign commercial enterprises. So, wherever possible, free and open source products were chosen. The Solaris operating system, the GNOME desktop environment, and the OpenOffice.org productivity tools fit the bill, and they nicely complement the medical records software to be used in these clinics, OpenMRS.
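
    As a rough sanity check on the numbers above, here is a minimal power-budget sketch in Python. The Sun Ray and per-workstation figures come from this post; the display, server, and switch wattages are illustrative assumptions, not measured values:

```python
# Rough power budget for one rural clinic, using the approximate figures
# quoted above. The display, server, and switch wattages are illustrative
# assumptions, not measurements.
SUN_RAY_W = 4      # Sun Ray 2 thin client, ~4 watts
DISPLAY_W = 20     # low-power 15" display (assumed), workstation total < 25 W
SERVER_W = 250     # Sun Fire X2100 under load (assumed)
SWITCH_W = 50      # small network switch (assumed)
WORKSTATIONS = 7   # workstations in a typical clinic

workstation_w = SUN_RAY_W + DISPLAY_W
clinic_total_w = WORKSTATIONS * workstation_w + SERVER_W + SWITCH_W

print("Per workstation: %d W" % workstation_w)                      # 24 W, under the 25 W figure
print("Clinic total:    %d W (target < 500 W)" % clinic_total_w)    # 468 W
```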

    OpenMRS and Africa's Health Workforce

    OpenMRS is an open source application, written in Java, that was conceived by Paul Biondich at the Regenstrief Institute. It is designed expressly to address the need for electronic medical record keeping in the developing world, but also to serve as a framework for building generalized medical informatics systems. The Rwandan government selected OpenMRS as a key component of their healthcare scale up effort.

    A few countries in sub-Saharan Africa, Rwanda among them, have set a long-range vision for national economic and social advancement. Vision 2020 is Rwanda's development strategy to achieve or even surpass developed world standards for national government, rule of law, education and human resources, infrastructure, entrepreneurship, and agriculture. OpenMRS and the supporting open source and energy efficient technologies from Sun contribute toward the infrastructure as well as the human resource goals of the plan. These tools will help to expand the pool of workers capable of delivering critical forms of healthcare by providing a standard protocol and reference resources to community members and paraprofessionals employed in health worker roles, thereby alleviating dependence on highly trained medical practitioners for routine diagnosis and procedures. Estimates from the development community and the UN indicate it would take more than 20 years for sub-Saharan countries to reach the UN target ratio of 2.5 health workers per 1,000 people, assuming they even had sufficient training capacity to matriculate that many doctors and nurses. Instead, OpenMRS helps to change the equation and makes it possible to expand health care services much faster than would be possible in a traditional public health model.

    In essence, the four workstations in the photo represent a new model health system, not only for Rwanda, but potentially for many other countries in the developing world.


    Related Reading:

    • Can Rwanda's Vision 2020 Succeed? post on WorldChanging.org
    • OpenMRS FAQ
    • Sun Ray in Health Care (and other Sun Ray success stories)

    February 17, 2009

    Help Yourself to Some OpenSolaris on EC2

    In the maelstrom of preoccupations that kept me awake last night, self-service in the cloud was a strangely prominent theme. A sad commentary on my slumber time, I know, but it was eerily coincident with news that OpenSolaris had been freed from its special registration process - when I woke this morning I found this announcement in my Inbox:

    News Flash for Our OpenSolaris 2008.11 on Amazon EC2 Users!

    We are happy to inform you that the latest OpenSolaris 2008.11 Base AMIs on Amazon EC2 in the US and Europe are now available to you and your users with no registration required! Please stay tuned for more OpenSolaris 2008.11 AMI stacks coming soon for you to quickly access. The registration process for pre-OpenSolaris 2008.11 AMIs is still in effect.

    For your reference, here are the AMI IDs:
    OpenSolaris 2008.11 (US) 32-bit AMI: ami-7db75014
    OpenSolaris 2008.11 (Europe) 32-bit AMI: ami-6c1c3418

    To read about what's new in OpenSolaris 2008.11, please visit the OpenSolaris Web site.
    OpenSolaris on EC2 had been available for months, but it was cloistered behind a registration process that involved waiting for a human to get back to you with approval of your request. No more. Now OpenSolaris on EC2 is a first-class citizen alongside all the other *nix and Windows distros, available self-service to anyone with an AWS account.
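
    With registration out of the way, launching one of these AMIs is a few lines against the standard EC2 API. Here's a minimal sketch using the boto library; it assumes AWS credentials are available in the environment or boto config, and the key pair and security group names are placeholders:

```python
# Minimal sketch: launch the 32-bit OpenSolaris 2008.11 AMI (US) with boto.
# Assumes AWS credentials are in the environment or boto config; the key pair
# and security group names below are placeholders.
from boto.ec2.connection import EC2Connection

conn = EC2Connection()  # picks up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY

reservation = conn.run_instances(
    'ami-7db75014',               # OpenSolaris 2008.11 (US) 32-bit AMI from the announcement
    instance_type='m1.small',
    key_name='my-keypair',        # placeholder
    security_groups=['default'])  # placeholder

instance = reservation.instances[0]
print("Launched %s, state: %s" % (instance.id, instance.state))
```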

    February 08, 2009 in Cloud Computing

    An Evolving Maturity Model for Clouds

    In a post on his Wisdom of Clouds blog last week, James Urquhart proposed five phases of Cloud Maturity.

    I fully concur with the phases of this maturity model, and with Urquhart's assessment of the current state of enterprise IT on the scale, i.e., most have consolidated, some have abstracted, fewer have automated, and only a handful are experimenting with metering and self service. None have achieved open Internet-based Market capability, yet.

    This maturity model is useful, but I can't say I find the first four phase names, order, or definitions to be novel in any way - I've been writing these exact same phases on client whiteboards for about three years.

    Maturity Model (MM) Hopping Disallowed

    The fifth phase (Market), however, is new and insightful, and it reshapes the preceding four in interesting ways. In other words, if you're on a path to reach Market maturity, then certain capabilities must be addressed in preceding phases that weren't necessarily required in earlier models that stopped at Utility. For example, elements of service level management must be addressed in the Automation and Utility phases that were not essential prior to the proposed model. In a pre-Market maturity model, enterprise IT could deliver automatic provisioning and pay-for-use to their customers without demonstrating compliance with specific service levels. That won't fly in a price-arbitraged cloud Market, so these Market-critical capabilities must be built into the earlier phases to which they correspond. Maturity models are only useful if each phase inherits all the capabilities of the preceding phases. If additional Automation capabilities are required to achieve Market capability, then Automation was not really achieved at phase three.

    What of Elasticity?

    I'm not convinced that this is a comprehensive maturity model, or that we can fit clouds, both public and private, into a single vector such as this. For instance, where does Elasticity fit? Auto-scaling relies on Automation, but would we require it of any environment claiming to be Automated? The pay-for-use implication of Utility does not necessarily mean resources are acquired and released in conjunction with use - metering is not intrinsic to provisioning, and vice versa. Elasticity, I submit, implies growing and shrinking resources synchronously with customer demand. So, does Elasticity warrant its own phase in the Cloud Maturity Model? What are the implications of this model for private clouds? Does a private cloud ever reach Market phase maturity?
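
    To make the distinction concrete, here's a minimal conceptual sketch of what Elasticity adds on top of Automation: a control loop that grows and shrinks capacity in step with demand. None of the function names or thresholds correspond to a real cloud API; they are placeholders for whatever the Automation layer provides:

```python
# Conceptual sketch of an elasticity control loop. Automation supplies
# launch_instance()/terminate_instance(); Elasticity is the policy that
# calls them in sync with demand. All names and thresholds are illustrative.
import time

def current_load():            # e.g., utilization from a monitoring system
    raise NotImplementedError  # placeholder for a real metric source

def launch_instance():         # provided by the Automation layer
    raise NotImplementedError

def terminate_instance():      # provided by the Automation layer
    raise NotImplementedError

SCALE_UP_AT = 0.75    # add capacity above 75% utilization
SCALE_DOWN_AT = 0.25  # release capacity below 25% utilization

def elasticity_loop(poll_seconds=60):
    while True:
        load = current_load()
        if load > SCALE_UP_AT:
            launch_instance()      # grow with demand
        elif load < SCALE_DOWN_AT:
            terminate_instance()   # shrink when demand falls off
        time.sleep(poll_seconds)
```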

    MM Drafted, Now Where's the Magic Quadrant?

    In any case, the proposal of a Cloud Maturity Model is a valuable step in the evolution of cloud computing, and the Market apex of the model seems like a reasonable goal. And there is an army of consultants forming to help enterprises address the climb.

    December 15, 2008 in Cloud Computing

    The Promise of Crowdsourcing

    Today I came across the work of Mike Krieger and Yan Yan Wang at Stanford's HCI Lab, in which they studied the efficacy of certain online brainstorming techniques. Their research compared idea generation tools to see if, through tool adaptations, it was possible to increase participation in expanding and improving ideas while overcoming the problems typical of brainstorming on discussion forums: too many ideas and not enough collaboration on them. Part of their research revealed some great lessons from crowdsourcing endeavors on the Internet, which Mike has shared in these slides posted on SlideShare.
    Crowds and Creativity

    The most valuable, and I think uniquely insightful, advice he gives is the set of 9 guidelines for successful crowdsourcing online:

    1. When diversity matters
    2. Small chunks/ delegate-able actions
    3. Easy verification
    4. Fun activity, or hidden ambition
    5. Better than computers at performing a task
    6. Learn from hacks, mods, re-use from crowd
    7. Enable novel knowledge discovery 
    8. Maintain vision & design consistency
    9. Not just about lower costs
    I see these tips as useful not only to the aspiring wiki-ist, but also to users of Mechanical Turk, which happens to be a tool that was instrumental in their Ideas2Ideas study. I'm looking forward to applying these guidelines and the design principles behind Ideas2Ideas to future crowdsourcing endeavors.

    December 06, 2008 in Crowdsourcing, Participation

    And the Spoils Go to the First Vendor Supporting IaaS Standards

    Ian Kallen over at Technorati wrote a nice post about the cloud computing ontology and the subtleties of Infrastructure as a Service (IaaS). I'm glad to know he's still working on the hard problems there at the blogosphere search engine after their recent cost cutting measures. As he has said to me previously, he writes, "What I foresee is that the first vendor to embrace and commoditize a standard interface for infrastructure management changes the game." I think he's right, particularly in his prediction that these standards will enable a marketplace in which workloads can be moved from cloud to cloud according to price, capacity, and feature criteria. A few companies are jockeying for the pole position in the race to provide the arbitrage for this meta cloud that Ian envisions. RightScale is perhaps in the best spot for that right now. But who's going to set the standards for interfacing with clouds? It's still pretty early in the game, but there's no question that Amazon has a good leg up with the AWS APIs, which are further buttressed by Eucalyptus's emulation of those interfaces in their open source, Xen-based IaaS stack. Meanwhile, Ruv over at Enomaly is fostering a Unified Cloud Interface (UCI) standard to be submitted to the IETF next year. Conspicuously, it appears that Amazon is involved in neither the Eucalyptus nor the UCI standards efforts. Meanwhile, RightScale is working closely with Rich Wolski's Eucalyptus team, and both of these standard bearers are advising on Sun's Network.com model. It will be interesting to witness the evolution of agreed-upon standard interfaces in the presence of the de facto standard that is AWS. Until there's a cleaner and/or cheaper way to develop on OpenSolaris in the cloud, I'll continue to write to the AWS interfaces to launch and extend instances of OpenSolaris on EC2.
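
    To illustrate what a standard interface for infrastructure management might enable, here's a minimal conceptual sketch in Python. It is not UCI, the EC2 API, or any shipping product - every class and method name is invented for illustration - but it shows how, once providers sit behind a common contract, the arbitrage Ian describes reduces to a scheduling decision:

```python
# Conceptual sketch of a provider-neutral IaaS interface. None of these
# classes correspond to a real API; they only illustrate how a common
# contract would let workloads move between clouds on price or capacity.

class CloudProvider(object):
    """Minimal contract a cloud would have to implement."""
    name = "abstract"
    def price_per_hour(self, instance_type):
        raise NotImplementedError
    def launch(self, image_id, instance_type):
        raise NotImplementedError
    def terminate(self, instance_id):
        raise NotImplementedError

class CloudA(CloudProvider):
    name = "cloud-a"
    def price_per_hour(self, instance_type):
        return 0.10                       # illustrative price
    def launch(self, image_id, instance_type):
        return "a-12345"
    def terminate(self, instance_id):
        pass

class CloudB(CloudProvider):
    name = "cloud-b"
    def price_per_hour(self, instance_type):
        return 0.08                       # illustrative price
    def launch(self, image_id, instance_type):
        return "b-67890"
    def terminate(self, instance_id):
        pass

def cheapest(providers, instance_type):
    # The "meta cloud" broker: pick the provider with the lowest price.
    return min(providers, key=lambda p: p.price_per_hour(instance_type))

if __name__ == "__main__":
    choice = cheapest([CloudA(), CloudB()], "small")
    print("Launching on %s" % choice.name)
    print(choice.launch("my-image", "small"))
```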

    December 04, 2008 in Cloud Computing

    Our New Open Source President

    On Thursday, CNN ran a post-election panel featuring Republican consultant Alex Castellanos, who hinted we may be blessed with an open source presidency come January. Citing Obama's deference to the American public in admitting "I need your help," Castellanos extrapolated that this is an unprecedented gesture to involve the people in making policy, opensource-stylie. Castellanos rationalized his interpretation of the president-elect's statement by contrasting Microsoft's "worship in our church, or else" approach to software with Obama's apparent participation age bias. Nice! Bravo! Bring it!

    November 09, 2008 in Current Affairs, Participation

    Expect More Innovation in Cloud Tools

    On Thursday, Amazon.com released a web GUI for AWS: the AWS Management Console.

    It's pretty slick, although still clearly a beta - some rough edges around navigation, and EC2 support only (no S3, SQS, CloudFront, etc. yet). If you're running lots of diverse AMIs, this single view is a great decision-making tool. Once they add Tagging (label and group Amazon EC2 resources with your own custom metadata), companies will be able to quickly see opportunities for optimization and grouping of operations, etc.

    The AWS announcement probably hurts RightScale, but this is their positive spin on it.

    This news from Amazon raises the bar on ease of use and the relative importance of self-service in the cloud market. Once they've experienced the AWS Management Console or RightScale's dashboard, many enterprises will want their own private clouds to be built with clean UIs and Web 2.0 ease of use too. While a quality programmatic interface is vital to the scaling needs of cloud users, a simple and useful set of GUI controls is equally important for those primarily seeking the self-service benefits of cloud computing.

    Entering the market without a comparable console will be a disadvantage for upstart public clouds, but this new prerequisite for clouds also creates an opportunity to up the ante further.

    A couple opportunities for value add come to mind:

    1. Social networking integration - one clear opportunity is to enhance cloud console functionality with existing social networks (a rough sketch follows this list). Imagine a 37Signals interface that lets you plot, in Basecamp, the sequence of operations required to upgrade your complex app running across 1000 instances, a message to Twitter followers when specific operations complete, and a LiveJournal post summarizing the status of the upgrade after completion - a social RESTful SOA for datacenter operations, if you will.
    2. Modeling and design tools - I expect companies like SmugMug won't use the AWS Dashboard and Control Panel features as-is, but would use a GUI that could help model different deployment patterns, quickly sort through sequencing and dependency issues, and compare performance characteristics of alternative architectures. (If you haven't read how SmugMug uses EC2 for their Skynet, check out Don MacAskill's post on it. A modeling tool might give Don a way to compare an SQS implementation with his home-grown solution, and make a decision informed by real financial and performance inputs.)
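
    As a rough sketch of the first idea, here's what a notification hook layered on the plain EC2 API could look like with boto: poll a batch of instances and hand off a status message once they're all running. The notify() function is a placeholder for posting to Twitter, Basecamp, or wherever - it is not any real client library:

```python
# Minimal sketch: poll EC2 instances and fire a notification when a batch
# operation (here, launching instances) completes. notify() is a placeholder
# for posting to Twitter, Basecamp, etc.
import time
from boto.ec2.connection import EC2Connection

def notify(message):
    print("NOTIFY: %s" % message)   # swap in your social/RESTful channel here

def wait_until_running(instance_ids, poll_seconds=30):
    conn = EC2Connection()          # credentials from environment/boto config
    while True:
        reservations = conn.get_all_instances(instance_ids)
        instances = [i for r in reservations for i in r.instances]
        pending = [i.id for i in instances if i.state != 'running']
        if not pending:
            notify("Upgrade step complete: %d instances running" % len(instances))
            return
        time.sleep(poll_seconds)
```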

    Other Cloud Computing news for the week ending 10-Jan-09:
    • Sun Acquires Q-Layer
    • Beta release of Cloud Foundry for deploying Java applications in EC2

    November 01, 2008 in Cloud Computing

    It Don't Take a Weatherman to Know Which Way the Wind Blows - Except Inside the Enterprise

    With California's first real rain of the season forecast for Friday, it's time to take stock of another weather system affecting the West (and other places connected to the Internet): cloud computing. Summer 2008 saw a downpour of cloud offerings. We've witnessed whole business ventures billow up and evaporate on the cost and agility promises of cloud computing. While storm systems continue to build off the Pacific coast, the long-range forecast is for an unstable system to dump on the landscape for a couple of quarters before a high pressure cell clears the air. Despite the instability (and, as if this hackneyed weather metaphor needed more abuse), it don't take a weatherman to know which way the wind blows - adoption of cloud computing will continue to rise.

    The storm of demand is fed by startups

    In the climate of "fail fast" startups, the appeal of cloud computing as a means of containing cost and improving productivity during the fragile stages of germination is obvious: skip over the infrastructure "muck" and keep your costs tied to your growth. "Fixed costs are taboo" is the principal directive from many VCs investing in Web startups - put the employees on a sustenance + equity compensation plan, and, for God's sake, don't spend anything on compute infrastructure you don't absolutely need.

    A major front accumulates in the enterprise

    But what about the enterprise? Enterprises differ from startups in how they evaluate risk and how they spend on IT services. In the enterprise computing landscape, risk-averse business leaders are concerned with reliability and control over their services and their data. Control is not one of the attributes primarily associated with cloud computing, security risks are a major barrier to enterprise adoption, and 99.9% availability is often not good enough for business-critical and mission-critical services. Further, and for the time being, fixed costs are already baked into the equation in most IT business models. In fact, most large enterprises treat IT as one big fixed cost, which they parcel out to business units according to some "fair share" cost allocation scheme.

    Rarely are the business units of a large enterprise satisfied with their cost allocation, let alone the IT services it pays for, but they're captives of myriad barriers like technical complexity, regulatory compliance, data provenance, spending constraints, and limited organizational imagination. One or more of these factors are impediments to any serious consideration of public cloud computing for existing enterprise IT needs. Business consumers of enterprise IT would like to have a secure, reliable, pay-as-you-go public utility service customized to their unique needs, but such a service does not exist. They'd use a public cloud for the cost and agility benefits if they perceived the risks to be acceptable, if their complex needs could be managed, and if they weren't already paying for IT services with funny money. Public cloud service providers are working on the availability concerns by committing to SLAs, and on certain security concerns by providing VPNs, but the reality is that the major refactoring of their huge software investments required to work in the public cloud will drive many enterprises to build their own cloud-like private infrastructure instead. In fact, any large enterprise is probably already doing this - the practice of building cloud-like infrastructure has been evolving for years under the cover of consolidation and virtualization initiatives.

    High clouds are approaching

    If predictions of mass consolidation onto public clouds prove true, then enterprise IT might be a dying breed of industrial infrastructure. But just as it took electric power distribution decades to transition from local DC power generation to utility grids, traditional data center bound enterprise IT won't die easily. Enterprises will strive for the kind of efficiency that propels public cloud adoption by continuing to invest in consolidation and virtualization in their own data centers. But consolidation and virtualization alone do not a cloud make, and will leave the consumers of enterprise IT with the same bucket of bits, still wanting for a cloud. So when does one confer cloud status on a consolidated, virtualized environment? The following simple criteria give a pretty decent working definition:
    1. When it delivers IT resources as a metered service (rather than an asset or a share of an asset), and
    2. When all its services can be accessed programmatically.

    Yes, the implication here is that cloudhood can be achieved in a private implementation. (This potentially violates certain tenuous claims that cloud services must be provided offsite and by a 3rd party, and that clouds are accessed over the Internet, but we'll not constrain the discussion with those seemingly arbitrary distinctions.)
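
    To make those two criteria concrete, here's a minimal sketch of the smallest possible "cloud" contract: every resource is requested and released programmatically, and every request is metered. The class and method names are invented for illustration and don't correspond to any product's API:

```python
# Conceptual sketch of the two cloud criteria above: programmatic access
# plus metering. All names and the rate are illustrative, not a real API.
import time
import uuid

class MinimalCloud(object):
    RATE_PER_HOUR = 0.10            # illustrative price per resource-hour

    def __init__(self):
        self._leases = {}           # resource id -> lease start timestamp

    def provision(self, resource_type):
        """Criterion 2: every service is accessible programmatically."""
        rid = "%s-%s" % (resource_type, uuid.uuid4().hex[:8])
        self._leases[rid] = time.time()
        return rid

    def release(self, rid):
        self._leases.pop(rid, None)

    def usage_charge(self, rid):
        """Criterion 1: resources are delivered as a metered service."""
        hours = (time.time() - self._leases[rid]) / 3600.0
        return hours * self.RATE_PER_HOUR
```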

    Of course, the devil is in the details, so the next posts in this series will address more nuanced definitions of cloud computing. In particular, we'll examine the attributes of cloud computing as put forth by other aficionados, and what value and relevance these attributes have to business consumers of enterprise IT.


    Related reading

    • 10 Reasons Enterprises Aren’t Ready to Trust the Cloud
    • Creating a Generic (Internal) Cloud Architecture
    • Gartner Tech Forecast : Cloudy and Getting Cloudier
    • How clouds and grids are different

    October 30, 2008 in Cloud Computing
