Who should use component content management?

Content authoring has gone through dramatic changes in recent years, driven by increasingly complex publishing demands. Ten to twenty years ago, publishing focused on only one or two targets: print and the web (consumed on desktop computers). Since then, the number of publishing channels and the diversity of target devices have multiplied.

Not only must the layout adjust to the variety of devices, but the information has to be filtered and personalized to make it easy to access and relevant to the consumer. The way customers use content is slowly changing as well. We have less time (and patience) now to read professional books the traditional way (from beginning to end) or to search for a solution in a reference book. Instead we just want to ask a question and get the answer as soon as possible, from the machine.

The data flow has become much wider and more open, crossing company boundaries, which requires automated data and content flow integrations. The computer has to be able to merge information from different sources.

Artificial intelligence has been developing fast recently, but there is still a long way to go, and the task becomes much easier if we structure, mark up, and enrich the content with explicit information. This makes it possible to automate content processing: inject, extract, merge, and split information at any necessary level. This way we can make our content fit multiple purposes.
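As a small illustration of why explicit markup matters, the sketch below parses a hypothetical DITA-like task topic (the element names and content are invented for this example) with Python's standard library and extracts just the steps, something a machine cannot reliably do with unstructured prose:

```python
import xml.etree.ElementTree as ET

# A hypothetical DITA-like task topic: the markup names each part explicitly.
topic = """
<task id="replace-filter">
  <title>Replacing the filter</title>
  <steps>
    <step>Unplug the unit.</step>
    <step>Remove the old filter.</step>
    <step>Insert the new filter.</step>
  </steps>
</task>
"""

root = ET.fromstring(topic)

# Because the structure is explicit, a machine can answer a narrow question
# ("what are the steps?") without having to understand the prose itself.
steps = [step.text for step in root.iter("step")]
print(steps)
# → ['Unplug the unit.', 'Remove the old filter.', 'Insert the new filter.']
```

The same explicitness is what lets a component content management system reuse, filter, or merge such fragments across deliverables.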

History of content authoring technologies

  • 1980s
    • Still most word processors use their own proprietary file format.
    • Character encoding is a challenge. A huge number of region / country specific character sets coexist.
    • 1985 - LaTeX - plain-text typesetting used for scientific documents.
    • 1986 - the SGML standard was born.
    • 1987 - the TEI (Text Encoding Initiative) was founded.
  • 1990s
    • 1991 - the first volume of the Unicode standard was published. It was adopted very slowly.
    • SGML-driven CMSes emerged. They stored data and content in relational databases.
    • DocBook - a semantic markup language for technical documentation - is developed.
    • 1997 - XML became a W3C recommendation.
    • Metadata modeling can be done with RDF or Topic Maps.
  • 2000s
    • DITA (Darwin Information Typing Architecture), an open-standard XML model for authoring and publishing, was born.
    • Native XML databases emerged.
    • 2006 - the XML-based OpenDocument format (ODF) became an ISO standard.
    • 2008 - Office Open XML, Microsoft's replacement for their proprietary binary formats, became an ISO standard.
  • 2010s
    • DITA gains momentum against DocBook and proprietary semantic schemas.
    • The semantic web brings semantics into HTML.
    • Linked data gains importance.
