
Thursday 3 December 2009

Single sourcing is a bad name for multiple sourcing

Michael Hiatt's "The Myth of Single Source Authoring" has caused a bit of comment in the technical communication arena. But it's not an attack on single sourcing, at least the way I understand it. The clue is in his second paragraph:
And who will be our emerging heroes to fill the promise of content reuse and localization savings? Knowledge mashups and applications using cloud-based linked data and the emergence of the semantic Web.
Wait, what? Single sourcing is essential for knowledge mashups! Let me explain:



Single sourcing may be a bad name. Single sourcing does not mean "a tightly-controlled, single, authoritative source for all information, presented in a canonical form which will be used regardless of the output format or the audience." It certainly doesn't mean, as he puts it, "the belief that static authoring from a single vantage point from a single author paid by a single organization is a workable system". Of course it isn't. Wasn't that the precise thing single sourcing was developed to overcome?


For me, single sourcing means "for each piece of information, having an identifiable owner, and empowering that owner to act as a single source for that information, in whatever information use environment it is presented."
 

In the old days, every document had a single author (paid by a single organization), which meant that the same information was presented in different ways in different documents. And this is the most important point that Michael makes: there was nothing wrong with that, because a well-designed documentation set is broken up into documents aimed at different use environments, so each document should be written in a different way. The biggest mistake in the single-sourcing world is the idea that you can reuse authored topics effectively between use environments; even Wikipedia knows that.


The author's job, in both traditional and single-sourced contexts, is to identify an information use environment, in fact, to enact an information use environment. "Information use environment" includes audience, language, culture, expectations, anything which affects how someone uses information. There are technically as many information use environments as there are occasions a person has to use information; but readers are malleable, and willing to mould their environment to some extent to fit with the information they have access to.


Once the use environment has been enacted, and agreed between author and reader, then the author can "suit" the information she presents to that environment. The difference between traditional authoring and single-source authoring is that the process of "suiting" information in a single-sourced system occurs at the single, original, source of the information, or at least at the point where it enters the author's domain, not at the point where it leaves to be assembled into a document.
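
To make that concrete, here is a minimal sketch of suiting at the source (all names invented; an illustration, not a real system): the owner keeps one item of information, attaches a variant suited to each enacted use environment, and assembling a document merely selects a variant.

```python
# Minimal sketch (all names invented): "suiting" happens at the
# single source, where the owner attaches a variant per enacted
# use environment; assembling a document only selects a variant.
from dataclasses import dataclass, field


@dataclass
class SourceItem:
    owner: str                                     # the identifiable single source
    variants: dict[str, str] = field(default_factory=dict)

    def suit(self, environment: str, text: str) -> None:
        """Suit the information to a use environment, at the source."""
        self.variants[environment] = text

    def present(self, environment: str) -> str:
        """Assembly into a document selects; it does not rewrite."""
        return self.variants[environment]


install = SourceItem(owner="docs-team")
install.suit("developer-guide", "Run `make install` from the repository root.")
install.suit("end-user-manual", "Double-click the installer and follow the prompts.")
print(install.present("end-user-manual"))
```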


For material which is authored in-house, the difference is small, since it is the author who originated the material. Likewise, for material which an organization aims at a very specific information use environment, the difference is small, since it is the author's organization that enacted that use environment.


So how can we benefit from single sourcing? The key is in that action of enacting. In a small company, the sole technical author bears responsibility for enacting the use environment. Because she enacted it, she finds she cannot reuse anyone else's information. As organizations grow, authoring teams share house standards, and the enacted use environments get codified so that authoring teams can successfully collaborate. If, as Michael says, "a writer seldom grabs a topic wholesale and places it into his or her document. Topics rarely meet all needs of the author and usually throw off the context and purpose of the document", this is a symptom of a lack of standards in the organization, so that individual authors are making their own decisions about the target audience.


Sometimes this is the right thing to do, and as an organization grows, naturally the number of use environments it is exposed to also grows; but there is always a core, the "standard documentation set", whose use environment has been fully enacted and formalized.


Which is where mashups come in. We cannot expect a mashup to be successful unless we share enacted use environments between organizations; ultimately, globally. But when this happens, it will be a revolution, because readers of any content from any organization will understand their role in the environment, it will become part of the culture of information use, not just part of the house style of an organization. (Look at the "enacted use environment" of pictograms in airports and other places: a truly global standard, which almost everyone thinks of as "intuitive" purely because it's so culturally ingrained.)


With single sourcing, once we have agreed (to whatever extent) to enact a particular use environment and write content for it, an organization will be able to re-use content from any other organization, anywhere, and it will fit in seamlessly. Without it, an organization will always have to rewrite information so that it speaks their language.


And that is why single sourcing is really multiple sourcing.

2 comments:

  1. I think I agree with pretty much all that David has to say here. I like the concept of multisourcing and the need for mashups to use single sourcing of already-published articles.

    The comment about the author's job being "to identify an information use environment, in fact, to enact an information use environment" stands out. What that means to me is identifying the context and relationship between author and reader and responding to those needs.

    I especially like this: "We cannot expect a mashup to be successful unless we share enacted use environments between organizations; ultimately, globally. But when this happens, it will be a revolution, because readers of any content from any organization will understand their role in the environment, it will become part of the culture of information use." I think this will be the key in semantic web, mashups, and linked data initiatives. We reuse content we trust, allowing us to pull in dynamic content from DBpedia which takes it from Wikipedia. This is when single-sourcing will work in the larger context of multisourcing.
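
    For instance, pulling a fact from DBpedia might look something like this (a sketch only: it assumes the public SPARQL endpoint at https://dbpedia.org/sparql, Python's requests library, and a resource name chosen purely for illustration):

    ```python
    # Sketch: pull one piece of single-sourced content from DBpedia,
    # which republishes Wikipedia as linked data. The resource name
    # below is illustrative, not a tested example.
    import requests

    QUERY = """
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?abstract WHERE {
      dbr:Technical_communication dbo:abstract ?abstract .
      FILTER (lang(?abstract) = "en")
    }
    """

    resp = requests.get(
        "https://dbpedia.org/sparql",
        params={"query": QUERY},
        headers={"Accept": "application/sparql-results+json"},
    )
    for row in resp.json()["results"]["bindings"]:
        print(row["abstract"]["value"])
    ```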

    What I don't agree with is the claim that a company just needs standards to make single-sourcing work. In theory that may work, but not in fact. It's just not worth the effort. I have worked with companies that enacted policies all over the place, but in the end it led to stilted writing, diluted content, and lots of money spent structuring information with a shelf life of mere months. But I think that David's reasoning here is that content in the cloud can be dynamic, allowing for trust not just between organizations, but between user groups and social sites.

    I also think that David intimates that articles need to be fully formed, not the overly granular concepts I have used in DocBook and DITA, where you have a bucket of conceptual, reference, and procedural topics to pull together. I think the smallest part should be a combination of all of these to impart full knowledge to the reader. As a matter of fact, an old company of mine now has the concept of "topic clusters," or in my mind, a complete article.

    Another issue is the concept of in-house authoring altogether, compared with outlets loyal to customers/users/readers rather than to a slanted view of a product. This is perhaps something for another time.

    Michael Hiatt, Mashstream.com

  2. Hi Michael, thanks for responding.

    When I say "enact" an information use environment I do mean identifying context and relationship, but I'm trying to stress that both parties have a role to play in defining that context/relationship. The reader does as much work in responding to the outlook of the document as the author does. In traditional authoring, that work has to be done as part of the cultural background. Perhaps in the future it will be possible for this dialogue to be more explicit (and more efficient).

    I don't entirely agree that articles should be fully formed, for two reasons:

    1) as an organization we can minimize, with the complicity of our readers, the number of use environments we support. I can generally make the assumption that all Java software developers who want to use Component X, say, share a degree of culture and expectations which allows me to treat them the same. Therefore I can re-use at least some content between the articles I aim at that set of people.

    2) More practically, using granular topics is the expected way to enable content to be mashed up into one output. I suppose you could have a system based on "extension points" or something, but I like the clean design of separating maps from topics, and granularity is what you end up with when you accept that.
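
    To put point 2 another way, here's a toy sketch (names invented) of the map/topic separation: topics stay granular and owned, maps impose the document structure, and so two outputs can share topics without sharing structure.

    ```python
    # Toy sketch (names invented): granular topics plus maps that
    # arrange them. Two documents reuse the same topics while each
    # map owns its own structure.
    topics = {
        "install":   "To install Component X, run the installer.",
        "configure": "Set the endpoint URL in settings.xml.",
        "api-auth":  "Pass your API key in the Authorization header.",
    }

    maps = {
        "getting-started": ["install", "configure"],
        "developer-ref":   ["configure", "api-auth"],
    }

    def assemble(map_name: str) -> str:
        """Resolve a map into one output assembled from shared topics."""
        return "\n\n".join(topics[t] for t in maps[map_name])

    print(assemble("getting-started"))
    ```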

    And when I mention standards, I include culture, unspoken conventions, etc. in addition to written policies. It may well be not worth the effort to codify tacit standards, but this is a general property of tacit corporate knowledge, not a failing in the idea of standards. When external organizations get involved, it may be the only way to communicate our expectations.

