
Tuesday, January 19, 2010

What Can Enterprise Software Learn From CES? - Embrace Ubiquitous Convergence

One of the biggest revelations from my trip to CES is that ubiquitous computing, once an academic concept, has finally arrived. The data, voice, device, and display convergence is evident in the products I saw. There has been wide coverage of CES by bloggers who track consumer technology, but as a strategist and an enterprise software blogger I have a keen interest in assessing the impact of this ubiquitous convergence in consumer technology on enterprise software.

I believe consumers will soon start expecting a ubiquitous experience in everything they touch and interact with, from their coffee cups to their cars and everything in between. This effect will be even more pronounced among millennials, who grew up digital and are entering the workforce with an expectation of instant gratification. The mobile phone revolution was largely consumer-driven, and Apple made the smartphone category popular and appealing to non-enterprise consumers. Those consumers gradually started expecting a similar experience from enterprise software, which is why many enterprise software vendors are now scrambling to make mobile a priority. I suggest they learn a lesson from this and stay ahead of the curve as this ubiquitous convergence picks up momentum.

So what exactly does this mean for enterprise ISVs?

Any surface can be an interface and a display:

I saw a range of new interface and display technologies, including pico projectors, a multi-touch screen by 3M, a screen with haptic feedback, and 3D gestural interfaces. A combination of a cheap projector and a camera could turn any surface into a display or an interface. Consumers will interact with software in unanticipated and unimaginable ways, which will put ISVs under pressure to support these alternate displays and interfaces. I see this as an opportunity for ISVs to differentiate their offerings by leveraging this technology trend instead of succumbing to it. Imagine a production floor with cameras and projectors mounted on all the walls: a maintenance technician walks in, and the maintenance information is projected onto the machine itself, which also doubles as a touch interface. The best interface is no interface. We all use software because we have to.

Location-based applications and geotagging will be a killer combination:

Google's Favorite Places and Nokia's Point and Find (which I saw at CES) are attempts to organize, and more importantly to own, the information about places and objects using QR codes. QR codes are fairly easy to generate and have a flexible, extensible structure for holding useful information. The QR code reader is a device most of us already own: a camera phone with a working data connection. Combine geotagging with Augmented Reality, which is already fueling innovation in location-based applications, and you have a killer combination that could lead to some breakthrough innovation. This trend can easily be extended to enterprise software: geotag objects and their associated processes from cradle to grave, and provide contextual information to people when they interact with the software and the objects. This could lead to more efficient manufacturing, smarter supply chains, and sustainable product lifecycle management.
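
To make the geotagging idea concrete, here is a minimal sketch of encoding a geotagged enterprise asset into a QR code in Python. It assumes the third-party qrcode package is installed; the asset fields, coordinates, and URL are made-up examples, not part of any real system.

```python
# Minimal sketch: geotag an enterprise asset and encode it in a QR code.
# Assumes the third-party "qrcode" package (pip install qrcode[pil]).
import json
import qrcode

asset = {
    "asset_id": "PUMP-0042",           # hypothetical equipment identifier
    "lat": 37.3861, "lon": -122.0839,  # geotag: where the object lives
    "maintenance_url": "https://example.com/assets/PUMP-0042",
}

# Encode the payload as JSON; any camera phone with a QR reader and a
# data connection can decode it and pull up the contextual information.
img = qrcode.make(json.dumps(asset))
img.save("pump-0042.png")
```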

3D will go from "cool" to "useful" sooner than you think:

Yes, you and I will be wearing those 3D glasses in our living rooms, and maybe in our offices as well. Prada and Gucci might make them. What looks like the beginning of 3D with movies, video games, and game consoles is an area that is going to explode with opportunities, and what is being designed as "cool" will suddenly become "useful". With the exception of a few niche solutions, ISVs will likely brush off 3D as irrelevant at first, until someone unlocks the pot of gold and everyone else follows. Simply replicating the 3D analog world in digital form will not make software better. Adding a third dimension as eye candy could actually introduce noise for users who can read the data more effectively in 2D. ISVs will have to hunt for scenarios that amplify cognition and help users understand data around business processes that is beyond their capacity to process in 2D. 3D will be most effective when used in conjunction with complementary technology, such as multi-touch interfaces that provide 3D affordances, and with location-based and mapping technology to manage objects in the analog 3D world.

The rendering technology will outpace non-graphics computation technology:

Investment in rendering hardware, such as Toshiba's TV with Cell processors and graphics cards from ATI and NVIDIA, complements the innovation in display technology: LED, OLED, energy-efficient plasma, and so on. The combination of faster processors and sophisticated software is delivering high-quality graphics at all form factors. Enterprise software ISVs have so far focused on algorithmic computation over large volumes of data to design their solutions, and rendering computation has always lagged behind non-graphics data computation. Rendering has finally not only caught up but will soon outpace non-graphics computation in some areas. This opens up opportunities to design software that not only crunches large volumes of data but also leverages high-quality graphics without any perceived lag, delivering a stunning user experience and real-time analysis and analytics.

Consumers will have a "Personal Cloud" to complement the public cloud:

Okay, this is a stretch, but let me make an attempt to put all the pieces together. Consumers now have access to ridiculously powerful processors and plenty of storage in their set-top boxes, computers, appliances, and so on. These devices can be networked over wired and wireless links, including wireless HDMI and USB 3.0. This configuration starts to smell like a mini "Personal Cloud", even though it does not have all the cloud properties. The public cloud, as we know it today, will mature and grow beyond utility computing and SaaS. The public cloud, hardware that leverages IPv6 and multicasting, and sophisticated CDNs will see plenty of innovation, ranging from streaming movies to comparing consumers' carbon footprints against their neighbors'. The public cloud and the personal cloud will complement each other in providing a seamless, ubiquitous user experience across all devices. ISVs who leverage the cloud and the channels to these consumer devices have great potential to grow a portfolio of solutions that extends well beyond enterprise software and reaches far more productive and delighted users.

I don't want to predict what is a fad and what is the future, but the convergence is clear and present. It is up to the ISVs to be innovative, find the golden nuggets, and tune out the noise to deliver better business value to their customers.

On a side note, I really badly want this iPhone-controlled AR.Drone - the coolest toy that I saw at CES!

Thursday, January 29, 2009

Open Source Software Business Models On The Cloud

There are strong synergies between Open Source Software (OSS) and cloud computing. The cloud is a great platform on which OSS business models, ranging from powering the cloud to offering OSS as SaaS, can flourish. There are still open issues around licenses, IP indemnification, and commercial open source strategy to support progressive OSS business models, but I do see cloud computing as a catalyst for innovating OSS business models.

Powering the cloud:
OSS can power cloud infrastructure the same way it has been powering on-premise infrastructure, letting cloud vendors minimize their TCO. A less-discussed benefit of OSS for the cloud is the use of core algorithms and formats such as MapReduce and Google's Protocol Buffers, which are central to parallel computing and lightweight data exchange. There are hundreds of other open (source) standards and algorithms that are a perfect fit for powering the cloud.
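
As a concrete illustration of the MapReduce idea referenced above, here is a toy word count expressed as a map phase followed by a reduce phase in Python. It is a single-machine sketch of the programming model only; real open source frameworks distribute these phases across a cluster.

```python
# Toy MapReduce: word count as independent map tasks plus a reduce over grouped keys.
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Emit (word, 1) pairs; each document can be mapped in parallel.
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # Group by key and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["open source powers the cloud", "the cloud powers open source"]
print(reduce_phase(chain.from_iterable(map_phase(d) for d in docs)))
```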

OSS lifecycle management: There is a disconnect between source code repositories, design-time tools, and the application runtime. Cloud vendors have the potential not only to provide an open source repository such as SourceForge but also to let developers build the code and deploy it on the cloud using the horsepower of cloud computing. Such centralized access to distributed computing makes it feasible to support the end-to-end OSS application lifecycle on a single platform.

OSS dissemination: Delivering pre-packaged and tested OSS bundles with support and upgrades has proven to be a successful business model for vendors such as Red Hat and SpikeSource. The cloud as an OSS dissemination platform could allow these vendors to scale up their infrastructure and operations to disseminate OSS to their customers. These vendors also gain a strategic advantage if their customers want to move their infrastructure to the cloud. This architectural approach scales to support all kinds of customer deployments - cloud, on-premise, or side-by-side.

The distributed computing capabilities of the cloud can also be used to perform static scans to identify changes between versions, track dependencies, shorten regression test runs, and so on. This could allow companies such as Black Duck to significantly shorten the code scans behind a variety of their offerings.
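
A rough sketch of the kind of static scan described above: fingerprint every source file in two versions of a code base and report only what changed, so a license or regression scan has a smaller delta to examine. The paths, file pattern, and layout are illustrative assumptions.

```python
# Minimal sketch: detect changed files between two versions of a code base.
import hashlib
from pathlib import Path

def fingerprint(root):
    # Map each relative file path to a SHA-256 digest of its contents.
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*.py") if p.is_file()
    }

def changed_files(old_root, new_root):
    old, new = fingerprint(old_root), fingerprint(new_root)
    # Files that are new or whose contents differ from the previous version.
    return sorted(f for f in new if old.get(f) != new[f])

# Example: changed_files("project-1.0", "project-1.1") lists the files a
# license or regression scan actually needs to revisit.
```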

Compose and run on the cloud: Vendors such as Coghead and Bungee Connect provide composition, development, and deployment of tools and applications on the cloud. These are not OSS solutions, but OSS vendors can build a similar business model to deliver the application lifecycle on the cloud.

OSS as SaaS: This is the holy grail of all the OSS business models I mentioned above. Don't just build, compose, or disseminate, but deliver a true SaaS experience to all your users. In this kind of experience the "service" is free and open source; the monetization is not about consuming the services but about using the OSS services as a base platform and providing a value proposition on top of them. Using the cloud as an OSS business platform would allow companies to experiment with their offerings in a true try-before-you-buy sense.

Monday, December 1, 2008

Does Cloud Computing Help Create Network Effect To Support Crowdsourcing And Collaborative Filtering?

Nick has a long post about Tim O'Reilly not getting the cloud. He questions Tim's assumptions on Web 2.0, network effects, power laws, and cloud computing. Both of them have good points.

O'Reilly comments on the cloud in the context of network effects:

"Cloud computing, at least in the sense that Hugh seems to be using the term, as a synonym for the infrastructure level of the cloud as best exemplified by Amazon S3 and EC2, doesn't have this kind of dynamic."

Nick argues:

"The network effect is indeed an important force shaping business online, and O'Reilly is right to remind us of that fact. But he's wrong to suggest that the network effect is the only or the most powerful means of achieving superior market share or profitability online or that it will be the defining formative factor for cloud computing."

Both of them also argue about applying power laws to cloud computing. I am with Nick on the power laws but strongly disagree with his view of cloud computing and network effects. The cloud at the infrastructure level will still follow the power laws, due to the inherently capital-intensive requirements of a data center, and the tools on the cloud will help create network effects. Let's make sure we all understand what the power laws are:

"In systems where many people are free to choose between many options, a small subset of the whole will get a disproportionate amount of traffic (or attention, or income), even if no members of the system actively work towards such an outcome. This has nothing to do with moral weakness, selling out, or any other psychological explanation. The very act of choosing, spread widely enough and freely enough, creates a power law distribution."

Any network effect starts with a small set of something - users, content, and so on - that eventually grows bigger and bigger. The cloud is a great platform for systems that demand this kind of growth. The adoption barrier is close to zero for companies whose business model actually depends upon creating these effects: they can provision their users, applications, and content on the cloud, be up and running in minutes, and grow as the user base and the content grow. This actually shifts power to the smaller players and helps them compete with the big cloud players while still creating network effects.

The big cloud players, which are currently on the supply side of this utility model, have a few options on the table. They can keep to the infrastructure business, though here I would wear my skeptic hat and agree with a lot of people about the poor viability of this capital-intensive business model with its very high operational cost. This option alone does not make sense; the big companies have to have a strategic intent behind such a large investment.

That strategic intent could be to SaaS up their tools and applications on the cloud. The investment in and control over the infrastructure provides a head start. They can also bring in a partner ecosystem and crowdsource a large user community to create a network effect of social innovation, based on collective intelligence, which in turn makes the tools better. One of the challenges for recommendation systems that use collaborative filtering is mining massive information - users' data and behavior - and computing correlations by linking it with massive information from other sources. The cloud is a good platform for such requirements because of its inherent ability to store vast amounts of information and perform massively parallel processing across heterogeneous sources. There are obvious privacy and security issues with this kind of approach, but they are not impossible to resolve.
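
To make the collaborative filtering point concrete, here is a toy item-item similarity sketch of the kind of correlation computation described above; real systems run this over massive data sets in parallel on the cloud. The ratings matrix is tiny and purely illustrative.

```python
# Toy item-item collaborative filtering with cosine similarity.
import numpy as np

# rows = users, columns = items, values = ratings (0 = not rated)
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def item_similarity(r):
    # Cosine similarity between item columns.
    norms = np.linalg.norm(r, axis=0)
    return (r.T @ r) / np.outer(norms, norms)

def recommend(user, r, sim, top_n=1):
    scores = r[user] @ sim               # weight items by similarity to what the user rated
    ranked = np.argsort(scores)[::-1]    # best candidates first
    unrated = [i for i in ranked if r[user, i] == 0]
    return unrated[:top_n]

sim = item_similarity(ratings)
print(recommend(0, ratings, sim))        # item(s) to suggest to user 0
```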

Google, Amazon, and Microsoft are supply-side cloud infrastructure players that are already moving into the demand side of the tools business, though I would not call them equal players exploring all the opportunities.

And last but not least, there is a sustainability angle for the cloud providers. They can help consolidate thousands of data centers into a few hundred, sited based on geographical coverage and the availability of water, energy, dark fiber, and so on. This is similar to consolidating hundreds of dirty coal plants into a few green power plants that produce clean energy with an efficient transmission and distribution system.

Thursday, October 16, 2008

Greening The Data Centers

Recently Google published the Power Usage Effectiveness (PUE) numbers of its data centers. PUE is defined as the ratio of the total power consumed by a data center to the power consumed by the IT equipment in the facility. Google's data centers' PUE ranges from 1.1 to 1.3, which is quite impressive, though it is unclear why the data centers have slightly different PUEs. Are they designed differently, or are they not all tuned for energy efficiency? In any case, I am glad to see that Google is committed to the Green Grid initiative and is making the measurement data and method publicly available. This should encourage other organizations to improve the energy performance of their data centers.
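
For illustration, the PUE definition above reduces to a simple ratio; the wattage figures in this sketch are made up.

```python
# PUE = total facility power / IT equipment power.
def pue(total_facility_power_kw, it_equipment_power_kw):
    return total_facility_power_kw / it_equipment_power_kw

# A facility drawing 1300 kW to run 1000 kW of IT load has a PUE of 1.3,
# i.e. 300 kW goes to cooling, power distribution losses, lighting, etc.
print(pue(1300, 1000))   # 1.3
```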

The energy efficiency of a data center can be classified into three main categories:

1. Efficiency of the facility: PUE is designed to measure this kind of efficiency, which depends on how the facility hosting a data center is designed: its physical location, layout, sizing, cooling systems, and so on. Some organizations have gotten quite creative here, setting up underground data centers to achieve consistent temperatures, locating data centers near a power generation facility, or even building their own captive power plants to reduce distribution loss from the grid and meet peak load demand.

2. Efficiency of the servers: This efficiency depends on the efficiency of the hardware components of the servers, such as the CPU, cooling fans, and drive motors. Hardware vendors have made significant progress in this area to provide energy-efficient solutions. Sun has backed the organization OpenEco, which helps participants assess, track, and compare energy performance, and has also published its own carbon footprint.

3. Efficiency of the software architecture: To achieve this kind of efficiency, the software architecture is optimized to consume less energy while providing the same functionality. Optimization techniques have so far focused on performance, storage, and manageability, ignoring the software architecture tuning that brings energy efficiency.

Round Robin is a popular load balancing algorithm for spreading load across servers, but it is known to be energy-inefficient. Another example is compression: if data is stored compressed on disk, it takes CPU cycles to decompress it; stored uncompressed, it takes more I/O calls to read. Everything else being equal, which approach requires less power? These are not trivial questions.
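
A rough sketch of why plain round robin can be energy-inefficient: it keeps every server lightly busy, while a consolidate-first policy packs the same load onto fewer servers so the rest can idle or power down. The request counts and capacity below are arbitrary assumptions, and the sketch ignores response-time trade-offs.

```python
# Compare how many servers each load balancing policy keeps awake.
def round_robin(requests, servers):
    load = [0] * servers
    for i in range(requests):
        load[i % servers] += 1          # spread load evenly; every server stays busy
    return load

def consolidate_first(requests, servers, capacity):
    load = [0] * servers
    for _ in range(requests):
        # Fill the first server that still has headroom before waking another.
        target = next(i for i, l in enumerate(load) if l < capacity)
        load[target] += 1
    return load

rr = round_robin(100, 10)
cf = consolidate_first(100, 10, capacity=50)
print("round robin active servers:", sum(1 for l in rr if l))    # 10
print("consolidated active servers:", sum(1 for l in cf if l))   # 2
```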

I do not favor an approach where the majority of programmers are required to change their behavior and learn a new way of writing code. One way to optimize the energy performance of the software architecture is to adopt an 80/20 rule: 80% of applications use 20% of the code, and in most cases that is infrastructure or middleware code. It is relatively easy to educate and train this small subset of programmers to optimize their code and architecture for energy efficiency. Virtualization could also help a lot in this area, since the execution layers can be abstracted into something that can be rapidly changed and tuned, without affecting the underlying code, to provide consistent functionality and behavior.

Energy efficiency cannot be achieved by tuning things in isolation; it requires a holistic approach. PUE identifies the energy lost before it reaches a server, an energy-efficient server requires less power than other servers to execute the same software, and an energy-efficient software architecture lowers the energy consumed to deliver the same functionality. We need to invest in all three categories.

Power consumption is just one aspect of being green. There are many other factors, such as how a data center handles e-waste, the building materials used, and the greenhouse gases from the captive power plant (if any) and the cooling plants. However, tackling energy efficiency is a great first step in greening the data centers.

Friday, September 12, 2008

Google Chrome Design Principles

Many of you will have read the Google Chrome comic strip and test-driven the browser. I have been following a few blog posts discussing the technical and business impact, but let's take a moment to look at some of the fundamental architectural design principles behind this browser and their impact on the ecosystem of web developers.
  • Embrace uncertainty and chaos: Google does not expect people to play nice. There are billions of pages with unique code, and rendering all of them perfectly is not what Google is after. Instead, Chrome puts people in charge of shutting down pages (applications) that misbehave. Empowering people to pick what they want and letting them filter out the bad experience is a great design approach.
  • Support the journey from pages to applications to the cloud: Google embraced the fact that the web is transitioning from pages to applications. It took an application-centric approach to designing the core architecture of Chrome, turned the browser into a gateway to the cloud, and yet maintained the tab metaphor to help users through this transition.
  • Scale through parallelism: Chrome's architecture makes each application a separate process. This allows Chrome to better tap into multi-core hardware if it gets enough help from the underlying operating system. Not choosing a multi-threaded architecture reinforces the fact that parallelism across cores is the only way to scale. I see an opportunity in designing a multi-core adaptation layer for Chrome to improve process context switching, since it still relies on a scheduler to get access to a CPU core (a minimal process-isolation sketch follows this list).
  • Don't change developers' behavior: JavaScript still dominates web development. Instead of asking developers to code differently, Google accelerated JavaScript via its V8 virtual machine. One of the major adoption challenges of parallel computing is composing applications to utilize multi-core architectures, and that composition requires developers to acquire and apply a new skill set and write code differently.
  • Practice traditional wisdom: Java introduced a really good garbage collector that was part of the core language from day one and did not require developers to explicitly manage memory. Java also had a sandbox model for Applets (the client-side runtime) that made Applets secure. Google recognized this traditional wisdom and applied the same concepts to JavaScript to make Chrome secure and memory-efficient.
  • Growing up as an organization: The Chrome team collaborated with Android to pick up WebKit rather than building a rendering engine of their own (not a common thing at Google). They used their existing search infrastructure to find the most relevant pages and tested Chrome against them. This makes it a good 80-20 browser (80% of the people always visit the same 20% of the pages). This approach demonstrates a high degree of cross-pollination. Google is growing up as an organization!
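
As a minimal illustration of the process-per-application idea in the list above, the sketch below runs each "page" in its own OS process so a misbehaving one can be killed without taking down its siblings. The tasks are stand-ins, not anything from Chrome's actual code base.

```python
# Process-per-"page" isolation sketch: kill one without affecting the others.
import time
from multiprocessing import Process

def page(name, runaway=False):
    # Each "page" lives in its own process; a runaway script only burns
    # its own process's CPU, never its siblings'.
    while True:
        if not runaway:
            time.sleep(0.2)

if __name__ == "__main__":
    tabs = {name: Process(target=page, args=(name, name == "bad-tab"))
            for name in ["mail", "news", "bad-tab"]}
    for p in tabs.values():
        p.start()

    time.sleep(1)
    tabs["bad-tab"].terminate()          # the user "closes" the misbehaving page
    tabs["bad-tab"].join()
    print({name: p.is_alive() for name, p in tabs.items()})  # the others keep running
    for p in tabs.values():
        p.terminate()                    # clean up the remaining pages
        p.join()
```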

Monday, July 21, 2008

SaaS platform pitfalls and strategy - Part 2

In part 1, I discussed my views on the top 10 mistakes that vendors make while designing a SaaS platform, as described in the post at GigaOM. This post, part 2, has my strategic recommendations to SaaS vendors on some important topics that are typically excluded from the overall platform strategy.

Don't simply reduce TCO, increase ROI: According to an enterprise customer survey carried out by McKinsey and SandHill this year, the buying centers for SaaS are expected to shift towards the business with less and less IT involvement. A SaaS vendor should design a platform that not only responds to the changing and evolving business needs of a customer but can also adapt to a changing macro-economic climate to serve customers better. Similarly, a vendor should carve out a go-to-market strategy targeting the business that demonstrates increased ROI, not just reduced TCO, even if the vendor is used to selling a highly technical component to IT.

The Long Tail: The SaaS approach enables a vendor to up-sell a solution to existing customers that is just a click away and does not require any implementation effort. A vendor should design a platform that can identify a customer's ongoing needs based on current information consumption, usage, and challenges, and tap into a recommendation engine to up-sell to them. A well-designed platform should allow vendors to keep upgrades simple, customers happy, and users delighted.

Hybrid deployment: The world is not black and white for customers; the deployment landscape is almost never SaaS-only or on-premise-only. Customers almost always end up with a hybrid approach. A SaaS platform should support integration scenarios that span from SaaS to on-premise. This is easier said than done, but if done correctly, SaaS can start replacing many on-premise applications by providing a superior (non)ownership experience. A typical integration scenario could be a recruitment process that an applicant begins outside the firewall on a SaaS application, with the process gradually moving that information into an enterprise application behind the firewall to complete the new-hire workflow and provision the employee into the system. Another scenario could be to process lead-to-order on SaaS and order-to-cash on-premise.

Ability to connect to other platforms: It would be a dire mistake to assume the standalone existence of any platform. Any and all platforms should have open, flexible, and high-performance interfaces to connect to other platforms. Traditionally the other platforms were standard enterprise software platforms, but now there is a proliferation of social network platforms, and a successful SaaS player will be the one who can tap into such organically growing social networking platforms. The participants of these platforms are connectors for an organization and can speed up cross-organizational SaaS adoption across silos that have traditionally been on-premise consumers.

Built for change: Rarely is a platform designed so that it can predict the technical, functional, and business impact of including a new feature or discarding an existing one. Take internationalization (i18n) as an example. The challenge of supporting i18n is not necessarily the resources or money required to translate content into many languages (Facebook crowdsourced it) but designing platform capabilities that can manage content in multiple languages efficiently. Many platform vendors make a conscious choice (rightfully so) not to support i18n in early versions of the platform. However, rarely does an architect design the current platform so that it can be changed predictably in the future to include a feature that was omitted. The design of a platform for current requirements and a design for future requirements are not mutually exclusive, and a good architect should be able to draw a continuum that has change predictability.

Virtualize everything: Virtualization can insulate a platform from ever-changing delivery options and allow vendors to focus on the core to deliver value to the applications built on the platform. A platform should not be married to a specific deployment option. For instance, a vendor should be able to take the platform off Amazon's cloud and put it on a different cluster without significant effort or disruption. Trends such as cloud computing have not yet hit the point of inflection, and the deployment options will keep changing; vendors should pay close attention to the maturity curve and hype cycle and make intelligent choices based on calculated risk.

Vendors should also virtualize the core components of the platform, such as multi-tenancy, and not limit their virtualization efforts to the deployment options. Multi-tenancy can be designed in many different ways at each layer, such as partitioning the database or using shared-nothing clusters. The risks and benefits of these approaches for achieving non-functional characteristics such as scalability, performance, and isolation change over time. Virtualizing multi-tenancy allows a vendor to manage the implementation, deployment, and management of a platform independently of constantly moving building blocks and hence guarantee the non-functional characteristics.
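
A minimal sketch of what "virtualizing" multi-tenancy could look like: application code asks a resolver for the tenant's data source and never hard-codes whether tenancy is implemented as a database per tenant or a shared, partitioned schema. The class names and connection strings are illustrative assumptions, not any vendor's API.

```python
# Abstracting the multi-tenancy implementation behind a resolver interface.
class TenantResolver:
    def connection_for(self, tenant_id):
        raise NotImplementedError

class DatabasePerTenant(TenantResolver):
    def connection_for(self, tenant_id):
        # Strong isolation: every tenant gets its own database.
        return f"postgres://db-{tenant_id}/app"

class SharedSchema(TenantResolver):
    def connection_for(self, tenant_id):
        # One database; every query is scoped by a tenant_id column.
        return f"postgres://shared/app?tenant={tenant_id}"

def load_invoices(resolver: TenantResolver, tenant_id: str):
    conn = resolver.connection_for(tenant_id)
    return f"SELECT * FROM invoices  -- via {conn}"

# Swapping the tenancy implementation does not touch application code:
print(load_invoices(DatabasePerTenant(), "acme"))
print(load_invoices(SharedSchema(), "acme"))
```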

Don't bypass IT: Instead, make friends with them and empower them to serve users better. Even if IT does not influence many SaaS purchase decisions, it is a politically well-connected and powerful organization that can help vendors in many ways. Give IT what they really want in a platform - security, standardization, and easy administration - and make them mavens of your products and platform.

Platform for participation: Opening up a platform to the ecosystem should not be an afterthought; it should be a core strategy for platform development and consumption. In its early years eBay charged developers to use its API, which inhibited growth; eBay later made the API free, and that decision helped eBay grow exponentially. I would even suggest open sourcing a few components of the platform and allowing developers to use the platform the way they want, without SaaS being the only deployment option.

Platform agnostic: Programming languages, hardware and deployment options, and UI frameworks change every few years. A true SaaS platform should be agnostic to these building blocks and provide many upstream and downstream alternatives to build applications and serve customers. This may sound obvious, but vendors do fall into the "cool technology" trap, which devalues the platform over time because of the inflexibility to adapt to a changing technology landscape.

Monday, July 7, 2008

SaaS platform - design and architecture pitfalls - Part 1

I cannot overemphasize how critical it is to get the SaaS platform design right up front. GigaOM has a post that describes the top 10 mistakes vendors make while designing a SaaS platform. I would argue that many of these mistakes are not specific to a SaaS platform but apply to any platform. I agree with most of the mistakes and recommendations; however, I have quite the opposite view on the rest. I also took the opportunity to think about some of the design and architectural must-have characteristics of a SaaS platform, which I will describe in part 2 of this post.

1) Failing to design for rollback

"...you can only make one tweak to your current process, make it so that you can always roll back any code changes..."

This is a universal truth for any platform design decision, irrespective of the delivery model, SaaS or on-premise. eBay is a good case study here: its code change management process, called "trains", can track down the code in a production system responsible for a specific defect and roll back only those changes. A philosophical mantra for architects and developers would be to avoid decisions that are irreversible. Framing it positively: prototype as fast as you can, fail early and often, and don't go for a big-bang design that you cannot reverse. Eventually the cumulative effort leads you to a sound and sustainable design.
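
As one way to make the design-for-rollback point concrete - and not a description of eBay's actual "trains" process - here is a minimal feature-flag sketch where the risky code path ships dark and can be switched off instantly if it misbehaves in production.

```python
# Minimal feature-flag rollback sketch: the new path can be disabled without a redeploy.
FLAGS = {"new_checkout_flow": True}   # in practice this lives in config, not code

def legacy_checkout(cart):
    return sum(cart)

def new_checkout(cart):
    return round(sum(cart), 2)        # the risky change, isolated behind the flag

def checkout(cart):
    if FLAGS.get("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)

# Rollback is a configuration change, not a code change:
FLAGS["new_checkout_flow"] = False
print(checkout([10.0, 5.5]))
```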

2) Confusing product release with product success

"...Do you have “release” parties? Don’t — you are sending your team the wrong message. A release has little to do with creating shareholder value..."

I would not go to the extreme of celebrating only customer success and not release milestones. Product development folks do work hard towards a release, and a celebration is a sense of accomplishment and a motivational factor that has indirect shareholder value. I would instead suggest a cross-functional celebration. Invite the sales and marketing people to the release party: this helps create empathy for the people in the field whom developers and architects rarely or never meet, and it is also an opportunity for the people in the field to mingle, discuss, and channel the customer's perspective. Similarly, include non-field people when celebrating field success. This helps developers, architects, and product managers understand their impact on the business and gives them a chance to learn who actually bought and started using their products.

5) Scaling through third parties

"....If you’re a hyper-growth SaaS site, you don’t want to be locked into a vendor for your future business viability..."

I would argue otherwise. A SaaS vendor or any other platform vendor should really focus on their core competencies and rely on third parties for everything that is non-core.

"Define how your platform scales through your efforts, not through the systems that a third-party vendor provides."

This is partially true. SaaS vendors do want to use Linux, Apache, or JBoss and still be able to describe the scalability of their platform in the context of these external components (open source, in this case). The partial truth is that you can still use the right components the wrong way and fail to scale. My recommendation to a platform vendor would be to be open: tell customers why and how they are using the third-party components and how that helps them (the vendor) focus on their core, and hence helps customers get the best out of the platform. A platform vendor should share best practices, gather feedback from customers and peers to improve its own processes and platform, and pass that feedback on to third parties to improve their components.

6) Relying on QA to find your mistakes:

"QA is a risk mitigation function and it should be treated as such"

The QA function has always been underrated and misunderstood. QA's role extends way beyond risk mitigation. You can only fix defects that you can find, and yes, I agree that mathematically it is impossible to find all defects. That is exactly why we need QA people. Smart, well-trained QA people think differently and find defects that developers would never have imagined. QA people have no code affinity or selection bias, so they can test for all kinds of conditions that would otherwise be missed. That said, I do agree that developers should put themselves in the shoes of the QA people and rigorously test their code, run automated unit tests and code coverage tools, and not just rely on QA people to find defects.

8) Not taking into account the multiplicative effect of failure:

"Eliminate synchronous calls wherever possible and create fault-isolative architectures to help you identify problems quickly."

Avoiding synchronous calls and swimlane architectures are great concepts, but a vendor should really focus on automated recovery and self-healing, not just failure detection. Failure detection can help a vendor isolate a problem and mitigate its overall impact on the system, but for a competitive SaaS vendor that is not good enough. Increasing MTBF (mean time between failures) is certainly important, but lowering MDT (mean down time) is even more important. A vendor should design a platform based on some of the autonomic computing fundamentals.
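
In the autonomic-computing spirit described above, here is a minimal self-healing sketch: the supervisor does not just detect the failure, it restarts the failed component automatically to keep down time low. The worker is a stand-in for a real service, and the restart policy is an arbitrary assumption.

```python
# Minimal supervisor: detect an abnormal exit and restart the worker automatically.
import time
from multiprocessing import Process

def service():
    time.sleep(0.5)
    raise RuntimeError("simulated crash")    # the fault we want to heal from

def supervise(restarts=3):
    for attempt in range(restarts):
        worker = Process(target=service)
        worker.start()
        worker.join()                         # health check: did it exit abnormally?
        if worker.exitcode == 0:
            return
        print(f"worker died (exit {worker.exitcode}); restarting ({attempt + 1}/{restarts})")
    print("escalating to a human after repeated failures")

if __name__ == "__main__":
    supervise()
```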

10) Not having a business continuity/disaster recovery plan:

"Even worse is not having a disaster recovery plan, which outlines how you will restore your site in the event a disaster shuts down a critical piece of your infrastructure, such as your collocation facility or connectivity provider."

Having a disaster recovery plan is like posting a sign by an elevator instructing people not to use it when there is a fire: any disaster recovery plan is, well, just a plan unless it is regularly tested, evaluated, and refined. Fire drills and post-drill debriefs are a must-have.

I will describe some of the design and architectural must-have characteristics of a SaaS platform in part 2 of this post.

Wednesday, February 20, 2008

Scenario-based enterprise architecture - CIO’s strategy to respond to a change

Scenario-based planning is inevitable for an enterprise architect. Changing business models, organizational dynamics, and disruptive technology are some of the change agents that require an enterprise architecture strategy agile enough to respond to them. CIO.com has a post on how to respond to a possible change in strategic direction due to a new CEO.

For CIOs, the key question is how to turn IT into an asset and a capability that supports the business rather than an IT bottleneck that everyone wants to avoid or circumvent. Scenario-based strategic IT planning, transparent policies, and appropriate governance can keep the enterprise architecture from falling apart and build capabilities that serve business needs and provide a competitive advantage.

Being tactical and strategic at the same time is what can make many CIOs successful. In my interactions with CIOs, I have found that some of their major concerns are organizational credibility and empowerment. The CIO is often seen as an inhibitor by business people, and it is the CIO's job to fix that perception. Being seen as a person who can respond to business needs quickly and proactively goes a long way toward fixing it. You cannot plan for every possible worst-case scenario, but at least keep your strategy nimble, with measures in place to react to the scenarios you did not plan for and to act ahead of time on the ones you did.

Sunday, September 9, 2007

The eBay way to keep infrastructure architecture nimble

eBay has come a long way from an infrastructure architecture perspective, from a system that didn't have any database to the latest Web 2.0 platform that supports millions of concurrent listings. An interview with eBay's VP of systems and architecture, James Barrese, The eBay way, describes this journey well. I liked the summary of the post:

"Innovating for a community of our size and maintaining the reliability that's expected is challenging, to say the least. Our business and IT leaders understand that to build a platform strategy, we must continue to create more infrastructure, and separate the infrastructure from our applications so we can remain nimble as a business. Despite the complexity, it's critical that IT is transparent to our internal business customers and that we don't burden our business units or our 233 million registered users with worries about availability, reliability, scalability, and security. That has to be woven into our day-to-day process. And it's what the millions of customers who make their living on eBay every day are counting on us to do."

eBay's strategy of identifying the pain points early, solving those problems first, and keeping the infrastructure nimble enough to adapt to growth has paid off. eBay focused on an automated process to roll out weekly builds into its production system and to track down the code change that could have destabilized a certain set of features. The most difficult aspect of sustaining engineering is isolating the change that causes an error; fixing the error once the root cause is known is relatively easy most of the time. eBay also embraces the fact that if you want to roll out changes quickly, limited QA efforts, automated or otherwise, will not guarantee that there won't be any errors. Anticipating errors and having a quick plan to fix them is a smart strategy.

If you read the post closely, you will observe that all the efforts seem to be related to infrastructure architecture: high availability, change management, security, third-party APIs, concurrency, and so on. eBay did not get distracted by the Web 2.0 bandwagon early on and instead focused on a platform strategy to support its core business. This is a lesson many organizations could learn: be nimble, do what your business needs, and don't get distracted by disruptive changes; embrace them gradually. Users will forgive you if your web site doesn't have rounded corners and doesn't do AJAX, but they won't forgive you if they couldn't bump up their bid and lost the auction because the web site was slow or unavailable.

One of the challenges eBay faced was the lack of good industry practices for this kind of requirement, since eBay was unique in how it grew exponentially and had to keep changing its infrastructure based on what it thought was the right way to do it. eBay is still working on a grid infrastructure that could standardize some of its infrastructure and service delivery platform architecture. This would certainly alleviate some of the pain of its proprietary infrastructure and could potentially become the de facto best practice for the entire industry for achieving the best on-demand user experience.

eBay kept it simple - a small list of trusted suppliers, infrastructure that can grow with its users, and a good set of third-party APIs and services to complete the ecosystem and empower users to get the maximum juice out of the platform. That's the eBay way!

Tuesday, September 4, 2007

SugarCRM hops on to multi-instance on-demand architecture bus

SugarCRM announced Sugar 5.0, which has a multi-instance on-demand architecture. This is the opposite of the multi-tenancy model, where many customers, if not all, share a single instance. Both models have their pros and cons, and adding the flexibility of an on-premise option complicates the equation a lot. But the fact is that many customers may not necessarily care which on-demand architecture the products are offered on, and any model can be given a marketing spin to meet customers' needs.

The multi-instance model resonates well with customers who are concerned about the privacy of their data. It is very close to an on-premise model, but the instance is managed by the vendor. It has all the upgrade and maintenance issues of any on-premise model, but a vendor can manage the slot more efficiently than a customer and can also use a utility hardware model and data center virtualization to a certain extent. Customizations are easy to preserve with this kind of deployment, but there is a support downside because each instance is unique.

Multi-tenant architecture has the benefit of easy upgrades and maintenance, since there is only one logical instance to maintain. This instance is deployed using clusters at the database and mid-tier levels for load balancing and high availability. As you can imagine, it is critical that the architecture supports "hot upgrade": take the instance down for scheduled or unscheduled downtime and all your customers are affected. Database vendors still struggle to provide a good highly available solution to support hot upgrades. This also puts pressure on application architects to minimize upgrade and maintenance time.
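
One common way to approach the hot-upgrade problem is a blue-green switch: two copies of the logical instance exist, and a router flips tenants from the old version to the new one without a maintenance window. The sketch below illustrates the idea only; it is not SugarCRM's or any vendor's actual mechanism.

```python
# Blue-green "hot upgrade" sketch: flip traffic between two running versions.
class AppVersion:
    def __init__(self, version):
        self.version = version
    def handle(self, tenant, request):
        return f"{tenant}: {request} served by v{self.version}"

class Router:
    def __init__(self, active):
        self.active = active
    def switch(self, new_active):
        self.active = new_active       # the "hot" part: an atomic pointer flip
    def handle(self, tenant, request):
        return self.active.handle(tenant, request)

router = Router(AppVersion("5.0"))
print(router.handle("acme", "list contacts"))
router.switch(AppVersion("5.1"))       # upgrade with no downtime for tenants
print(router.handle("acme", "list contacts"))
```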

And this is just the tip of the iceberg. As you dig more into the deployment options, you are basically opening a can of worms.

Tuesday, July 3, 2007

SOA ROI - interoperability and integration

If you are a SOA-enabled enterprise application vendor trying to sell SOA to your customers, you quickly realize that very few customers are interested in buying SOA by itself. Many customers consider SOA investment non-differentiating and compare it with compliance - you have to have it and there is no direct ROI. A vendor can offer ROI if it has the right integration and interoperability strategy. For customers it is all about lowering the TCO of the overall IT investment, not about looking at the TCO of individual applications. SOA-enabled applications with standardized, flexible, and interoperable interfaces work toward that lower TCO and provide customers a sustainable competitive advantage. Generally speaking, customers are not interested in the "integration governance" of the application provider as long as the applications are integrated out of the box and have the necessary services to support inbound and outbound integration with the customer's other software, in support of the customer's vision of true enterprise SOA.

There has always been a long debate about what makes a good integration strategy for SOA-enabled products. Organizations debate whether to use the same service interfaces for inter-application and intra-application integration. Intra-application integration has major challenges, especially for large organizations: different stakeholders and owners need to work together to make sure the applications are integrated out of the box. It sounds obvious, but it is not easy. In most cases it is a trade-off between being able to "eat your own dog food" by using the published interfaces and optimizing performance by compromising the abstraction with a different contract than the one used for inter-application integration. There are a few hybrid approaches that fall between these two alternatives, but it is always a difficult choice. Most customers do not pay much attention to the intra-application strategy, but it is still in a vendor's best interest to promote, practice, and advocate service-based composition over ad hoc integration. There are many ways to fine-tune runtime performance if this approach does result in degradation.

The other critical factor for ROI is interoperability. Internal service enablement doesn't necessarily have to be implemented as web services, but there is a lot of value in providing standardized service endpoints - essentially web services with published WSDL and WS-I profile compliance. Interoperability helps customers with their integration efforts and establishes trust and credibility in the vendor's offerings. I have also seen customers associate interoperability with transparency. Not all standards in the web services area have matured, which makes it difficult for a vendor to comply with or follow a particular set of standards, but at a minimum vendors can follow the best practices and the standards that have matured.

Sunday, June 17, 2007

SOA Governance - strategic or tactical?

It is both. SOA governance is not much different from any other kind of governance in an organization. Successful SOA governance cannot be achieved without a people framework. Socioeconomic factors such as organizational dynamics (a good synonym for politics, I think) drive the SOA strategy for an organization. This is especially true for IT organizations that are on the supply side of SOA for their product offerings. Many people miss the fact that governance efforts are not limited to the internal employees of an organization but typically extend to customers and partners. Many organizations co-innovate with customers and partners, and those partners and customers significantly influence the organization's SOA governance policies.

Many architects view SOA governance as a technical challenge, but I beg to differ. Strategic SOA governance is not just a technical problem; it is a business and process problem that has socioeconomic implications. I already talked about the people part. As for SOA economics, there is no good way to calculate ROI based on SOA alone. A few people have actually tried, and I am not sure it is the right model. The number of services, the number of reusable services, or any other QoS measure for SOA does not help build economic metrics. SOA is quite intertwined with the business, and it is your guess versus mine when it comes to extracting a monetary value out of it. Having said that, people do work hard on making a business case for their organizations, since SOA is hard to sell.

The strategic-to-tactical transformation of SOA is not easy. This is where people argue over reference architectures, policies, and so on. These are time-consuming, messy efforts that involve many technical, domain, and functional discussions. A cross-functional team works well for this kind of governance problem, since it is critical to have a holistic (horizontal) view of SOA with enough help from experts in the various (vertical) areas. SOA architects have to have good people and project management skills since, as I already mentioned, governance is not just a technical problem. If you are a technical architect, you end up with a diagram like this, and it does not help anyone: it mixes a lot of low-level details with high-level details, and the information is difficult to consume. Communicating the architecture is one of the harder challenges for an architect, and it becomes even more difficult when you are describing strategic SOA governance.