David Chappell



What, Why, and How: Communicating with Different IT Audiences  
# Wednesday, April 19, 2017
I spent the first several years of my career writing code. For all of that time, I divided the technology world into two groups: developers and non-developers. I didn’t have much respect for the second group; they were mostly IT managers and marketing people, and they weren’t very technical. Even worse, they didn’t make decisions based solely on which option provided the best technical solution, an approach I thought was inexplicable.

When I first moved from writing code into writing books and giving presentations, I held onto this perspective. My audiences were largely developers, and like me, they knew that technical arguments were all that mattered. In fact, we agreed that unless you really understood the details of competing technologies, you could never make good decisions.

But I was wrong. I’ve now spent many years working with both groups of people, and I’ve learned that the best technology isn’t necessarily the best choice. Even more important, a deep technical understanding of the options isn’t necessary to make a good decision. I’ve come to have a great deal of respect for IT managers and marketing people.

The truth is that different audiences care about different things. When I’m talking to developers, I still focus on what a technology is and how it works—this is what developers care about. But when my audience is IT managers or marketing people or other less technical folks, I briefly describe the what, then move on to why they should care about it. These people don’t need to know how to use something—the what and the why are far more important.

If you’re trying to communicate with different IT audiences, you might find it helpful to be clear about this difference. This is especially true if you’re trying to sell something. Developers rarely sign checks—they’re not usually the final decision makers—and telling a deeply technical story to IT managers won’t persuade them. The thing to remember is this: developers care most about what and how, while IT managers care about what and especially why. Give each audience the information it needs—and only the information it needs—and you’re likely to be significantly more successful.



Why Microsoft is Serious About Open Source  
# Monday, October 31, 2016
Open source software has had a huge impact on our industry. Over the last several years, just about every big IT vendor, including Microsoft, has embraced this approach to some degree. Now with Azure, Microsoft is telling us that it doesn't care whether we use open source software or Microsoft's own technologies.

Really? Can they be serious? Has Microsoft embraced open source this completely? The answer is yes, and here's why.

In the traditional software model, vendors made money through selling software licenses, as shown below.

In this approach, the vendor provides software that runs on the customer's premises, and the customer pays the vendor a one-time license fee. While there might also be annual maintenance fees, the bulk of the money the vendor gets is typically from this initial license.

This makes open source software, which typically shrinks or eliminates the license fee, a threat to the vendor's revenue. Steve Ballmer famously called open source a cancer. I don't know what was in his mind when he said this, but open source is certainly a cancer on the margins of the traditional license-based software business.

Today, though, this model is being replaced by cloud services. The picture now looks like this.

In this situation, the vendor runs the software, and the customer pays a monthly usage fee. Whether the software that provides a cloud service is open source or proprietary or some combination of the two doesn't typically have much impact on what the customer pays. They're paying for the service rather than licensing the software.
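To make the contrast between the two models concrete, here's a toy calculation. All of the dollar figures are invented purely for illustration:

```python
# Toy comparison of the two revenue models described above.
# Every number here is made up for illustration.

def license_revenue(license_fee, annual_maintenance, years):
    """Traditional model: one-time license fee plus yearly maintenance."""
    return license_fee + annual_maintenance * years

def cloud_revenue(monthly_fee, years):
    """Cloud model: recurring usage fee, billed monthly."""
    return monthly_fee * 12 * years

# A $10,000 license with $2,000/year maintenance vs. a $400/month service.
for years in (1, 3, 5):
    trad = license_revenue(10_000, 2_000, years)
    cloud = cloud_revenue(400, years)
    print(f"{years} years: license model ${trad:,}, cloud model ${cloud:,}")
```

Notice that with these (made-up) numbers the recurring model eventually overtakes the up-front license, which is part of why the shift to usage-based pricing suits vendors so well.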

This is why Microsoft is serious about open source in the cloud. Offering open source services, such as Azure's support for Linux, Node.js, and Hadoop, just gives Microsoft more things for customers to use. Because there's no software license revenue to protect, Microsoft need not care about what kind of software it deploys to provide a cloud service.

In other words, offering cloud services using open source software lets Microsoft make more money. And we should always trust Microsoft to do the things that will make them the most money.

In the pre-cloud era, open source was spreading into more and more areas, so much so that it was getting harder and harder for software companies to make money from traditional licenses. With the rise of cloud computing, this problem goes away, since vendors are now charging for usage. Maybe the cloud came along just in time to save the software business from the margin-destroying cancer of open source.



New Whitepapers: The Microsoft Data Platform  
# Thursday, March 17, 2016
After decades of dullness, data is back in vogue. As part of this, we're seeing an increasingly diverse set of data technologies available. Taken as a group, these technologies can be viewed as a platform for working with data.

I've written a set of three papers describing the Microsoft data platform today. Each paper covers the technologies for working with a specific kind of data--operational, analytical, or streaming--and each one is meant to be readable on its own. They're also meant to hang together as a group, which is why each one starts with the same big-picture diagram of this broad set of technologies. That diagram looks like this:

Each paper describes a particular column in this figure, and all three take a scenario-oriented view--they're not deep technology tutorials. The core audience is IT leaders, but I hope they're useful for anybody looking for a broad survey of what Microsoft offers today for working with data.

The papers, all sponsored by Microsoft, are available here:



SOA Lives! APIs and Microservices  
# Wednesday, February 17, 2016
A dozen years ago, service-oriented architecture (SOA) was all the rage. The idea of exposing application services in a standard way (which at the time meant via SOAP) was so attractive. Why not remake our software to reflect the then-new agreement on how applications should communicate?

But the SOA bubble burst pretty quickly. It turned out that solving the technical problem of communicating between software wasn't enough to solve the real problems. In particular, organizations had a very hard time agreeing on what services applications should expose, how those services should be versioned, and who should pay for what. Much like the software reuse bubble engendered by the advent of objects, and for many of the same reasons, the enterprise dream of universal integration through SOA didn't work out for most organizations.

Yet today, the descendants of SOA live on. Rather than focus on enterprise integration, each of these descendants picked up on a stream of SOA thought and took it further, eventually finding real success. The two most important of these are:
  • API management, where cloud-based services provide a standard mechanism for exposing, managing, and controlling access to software of various kinds. The dominant protocol is now REST, not SOAP, but the idea has gone mainstream through offerings from smaller firms (e.g., Apigee) to big ones (e.g., Microsoft and Amazon). In fact, API management has become so important that CA thinks it's worth running ads in the New York Times to explain the idea to non-technical readers.
  • Microservices, where applications are built from self-contained chunks of code that interact via well-defined interfaces. Rather than the grand enterprise integration schemes that drove much of the original SOA hype, microservices are primarily about building a single application. This simplifies communication--you can often dispense with authentication, for example--while still providing a way to create manageable, easily deployable application components. Once again, the big vendors are here, providing technologies such as Microsoft's Service Fabric to support this approach.
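The microservices idea above can be sketched in a few lines: a self-contained component that owns its own data and exposes it only through a small HTTP interface. This is a deliberately minimal illustration, not how you'd build a production service, and the service name and endpoint are invented:

```python
# A minimal, illustrative "microservice": a self-contained component
# that owns its own data and exposes it only via a small REST-style
# HTTP interface. The service and its endpoint are invented examples.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class InventoryHandler(BaseHTTPRequestHandler):
    # In-memory data owned entirely by this service; no other
    # component touches it directly.
    stock = {"widget": 12, "gadget": 3}

    def do_GET(self):
        item = self.path.strip("/")
        body = json.dumps({"item": item, "count": self.stock.get(item, 0)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):  # keep the demo quiet
        pass

# Run the service on a free local port.
server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another component talks to it only through its HTTP interface.
port = server.server_address[1]
with urlopen(f"http://127.0.0.1:{port}/widget") as resp:
    reply = json.loads(resp.read())
print(reply)  # {'item': 'widget', 'count': 12}
server.shutdown()
```

In a real system each such component would be deployed and versioned independently; the point here is only the shape of the interaction.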
When a new technology appears, it's always hard to know how best to use it. When SOAP first showed up, it kicked off the original SOA thrust toward enterprise integration. This was certainly a worthy goal, but over time, it's become evident that API management and microservices are the approaches that actually worked. It's also become apparent that the complexity of SOAP and its fellow travelers wasn't required--a RESTful approach (or with microservices, maybe something simpler) was usually good enough.

The startup that was SOA a dozen years ago has pivoted to become the much more successful API management and microservices of today.



New Whitepaper: Introducing Azure Machine Learning  
# Wednesday, August 05, 2015
Machine learning has become a big deal. The rise of big data and the massive computing power made possible by cloud computing have made this set of technologies much more useful.

But machine learning isn't especially simple. While the basics are fairly straightforward, they're cloaked in odd terminology, phrases like "training data" and "supervised learning". For data scientists, people with years of specialized training, this isn't a problem. For non-specialists, though, the topic can be off-putting.
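To unpack those two phrases: "training data" is just a set of labeled examples, and "supervised learning" means learning from those labels to predict the label of something new. Here's a toy nearest-neighbor classifier in pure Python that illustrates both terms; the scenario is invented, and real systems such as Azure ML use far more sophisticated algorithms:

```python
# Training data: (feature, label) pairs. Here the feature is hours of
# sun per day and the label is what happened to a hypothetical plant.
training_data = [(1.0, "died"), (2.0, "died"), (6.0, "thrived"), (8.0, "thrived")]

def predict(hours_of_sun):
    """Supervised learning at its simplest: return the label of the
    training example closest to the new input."""
    nearest = min(training_data, key=lambda ex: abs(ex[0] - hours_of_sun))
    return nearest[1]

print(predict(1.5))  # died
print(predict(7.0))  # thrived
```

That's the whole idea in miniature: labeled examples in, predictions out. Everything else is about doing this at scale, with better algorithms, on messier data.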

To perhaps help with this, I've written a Microsoft-sponsored introduction to Azure Machine Learning (ML). The paper's subtitle is A Guide for Technical Professionals, and that's exactly what it is: an introduction to machine learning for ordinary mortals. Azure ML is likely to become a broadly used technology, and so knowing the basics of machine learning is important. The paper's goal is to help you do this, using Azure ML as a concrete example.



New Whitepaper: Introducing Azure Search  
# Wednesday, April 15, 2015
For most of us, talking about search makes us think of Google (and maybe Bing). But for people who build applications, talking about search should bring something else to mind: the possibility of building a search box directly into a custom application's user interface. It's possible to do this with Google or Bing, but this approach has some limitations. Rather than relying on existing search services, creating a search UI whose results you control can have a lot of appeal.

One way to do this is to use Elasticsearch. A simpler option, though, is to use a managed search service such as Microsoft's recently announced Azure Search. Azure Search isn't designed for end users. Instead, it's accessed by applications via a RESTful interface. The goal is to make it straightforward for developers to add search to the UI of the applications they build.
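To give a flavor of what "accessed via a RESTful interface" means in practice, here's a sketch of the kind of GET request an application might construct against a managed search service. The service name, index name, and API version below are placeholders, not a working endpoint, and a real request would also need an authentication header such as an API key:

```python
# Sketch of a RESTful search request. "myservice", "products", and the
# API version are illustrative placeholders, not a live endpoint.
from urllib.parse import urlencode

def build_search_url(service, index, query, api_version="2015-02-28"):
    """Build a GET URL asking the service's index for matching documents."""
    params = urlencode({"search": query, "api-version": api_version})
    return f"https://{service}.search.windows.net/indexes/{index}/docs?{params}"

url = build_search_url("myservice", "products", "red bicycle")
print(url)
# https://myservice.search.windows.net/indexes/products/docs?search=red+bicycle&api-version=2015-02-28
```

The application sends a request like this, gets back matching documents as JSON, and renders them however its UI sees fit; the search service itself never talks to the end user.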

I've written a Microsoft-sponsored introduction to Azure Search, available here, that explains why adding search to custom apps makes sense. The paper also walks through the basics of the technology, giving you a big-picture sense of what Azure Search does and how it works.

I don't know about you, but I love search UIs. If every application I use offered at least the option of search, I'd be a happy man. The availability of Azure Search is a step on the road to making this happen.
