Welcome to DITA World 2018 Day One! It was announced that over 3,300 attendees had registered for this event, a new record! I appreciate the opportunity that Adobe has given me to share my notes with you for the second year in a row. Like in 2016 and 2017, the conference is hosted and moderated by Adobe TechComm Evangelist Stefan Gentz. This year his co-moderator is Matt Sullivan.

Welcome Address

The day started out with a “Welcome” presentation from Dawn Stevens of Comtech Services, who is also the CIDM president. Dawn opened her welcome note, “Intelligent Content for an Intelligent World,” by reminding us that we rarely take a minute to stop, think, and compose ourselves, yet a lot can happen in 60 seconds! In that time, 375,000 apps are downloaded, 600 pages are edited on Wikipedia, three million posts are shared on Facebook, 3.7 million Google searches are run, 18 million texts are sent, and much more! This makes each of us an information factory: globally, we create 2.5 quintillion bytes of data a day on average!

However, not all information is created equal. Most of it is text-heavy and unstructured, and even when we can find it, it might be false. Fake information is harder to decipher when it seems credible. Inaccurate information can also be problematic, especially when it is health-related, and conflicting information causes trouble of its own. Much of our information is duplicated as well. We even have to contend with outdated information, such as obsolete political sites still holding space on the web.

What can users do when faced with these obstacles, and how can tech comm help fix this? The solution is intelligent content for an intelligent world. Dawn cited Ann Rockley’s definition of intelligent content to describe what it should be, and it was fitting. The talks for this conference will lay that out!

Unifying Structured Content and Augmented Reality

The Future of Product Information and After-Sales

The event continued with Uli Henningsen’s keynote talk about unifying structured content and augmented reality. Uli works for Amplexor, a global company with offices in 22 countries that helps clients create digital experience websites, technical documentation, and translations, among other services. Those services also include helping clients create content for augmented reality, better known as “AR”!

Before talking about AR, Uli showed an example of poorly written instructions he had received with a Bluetooth speaker he was given. The technical document was unreadable: the instructions were set in a very tiny font, and the text wasn’t very understandable either. It wasn’t user-friendly, and it was an example of how UX is overlooked, especially from a marketing perspective.

He continued by pointing out that having one comprehensive repository, as in Adobe Experience Manager (AEM), helps immensely. AEM is used by many of the world’s top companies and offers everything you need for online services: CMS, campaigning, and service integration. With the XML Documentation solution that Adobe offers for AEM, you can create a CCMS (Component Content Management System). The solution architecture involves authoring, uploading to AEM, then collaborating with others on AEM. It can work in reverse as well, with collaboration happening on AEM and authors making the adjustments. This process makes publishing, translation, reporting, review and approval, and search easier and more consistent. A DITA web editor can be used in AEM to create documentation assets. Collaborators can be authors, reviewers, approvers, and system admins. You can use existing workflows and integrate easily with AEM. The added benefit is that AEM can also host an enterprise-class CCMS, which provides a unified strategy and a consistent experience.

As for augmented reality, AR augments a real-world view, such as with holograms (think HoloLens) or other superimposed graphics and data placed directly on a real object, allowing for more precision in instructions. AR is gaining momentum through widespread use in smartphones, tablets, and glasses devices in various applications. AR and VR (virtual reality) are expected to grow exponentially in the next few years. Based on 2017 studies by ABI Research, we’ll be moving away from mobile and toward smart glasses going forward. AR hardware is usually mixed-reality headsets, monocular devices (glasses), or smartphones. One example Uli provided was developed in Facebook’s AR Studio for a L’Oréal campaign: users could look through a special app with their phone cameras, and it would add a visual layer of cosmetics to their faces, so they could see what they looked like in makeup. Apple recently acquired an AR glasses company, so it looks like they are aiming for breakthrough products in a couple of years as well. Tim Cook was quoted as saying that someday soon we’ll wonder how we ever lived without AR! Expectations are extremely high.

Uli showed us an example of how his company, Amplexor, created AR documentation for their client, Leybold. Leybold makes vacuum pumps for manufacturing and technology, and the documentation was used for training and maintenance. Uli showed a cool video of how it worked, walking through the steps, with virtual parts visually layered over physical parts. Leybold uses the guided maintenance procedures for users without any technical knowledge, as the app allows workers to work hands-free, and 33% faster than with standard documentation. There’s in-app access to e-commerce platforms for ordering spare parts, and the AR app provides IoT-ready visualization with real-time data. There are a lot of strategic benefits to AR; the technology is already there.

Amplexor created enabling software called Re’flekt One to help bring AEM documentation together with 3D/CAD data and publish AR information that can be viewed on any device that runs AR. It’s in these circumstances that the need for structured content is apparent. With unstructured content, there are no specific rules and no specific characteristics are defined, and it’s difficult to extract content in a predefined format compared to structured content.

Using a DITA publishing workflow that integrates resources (images, 3D objects, animations) and text optimizes content for the AR experience. Uli showed how direct integration with AEM worked (FrameMaker was used for the initial writing): FrameMaker/XML files from an AEM assets folder, along with the images, animations, and 3D assets, served as a unified content source. Steps are created and maintained inside AEM with all assets (text, images, etc.) and published through a device-independent application like Re’flekt One to any device. Tech comm is gearing up to create exciting and great user experiences by being part of preparing this new technology!

Moving to DITA without Losing Your Soul

The Palo Alto Networks success story

Laralyn Melvin and Bernadette Javier of Palo Alto Networks spoke next about moving to DITA without losing your soul, based on their own experiences in making the leap to using DITA at their company.

The first step in their journey was defining their documentation strategy as a fast-growing company whose enterprise clients demanded enterprise-quality documentation. They were starting from scratch and found that the best way to begin was to identify a list of problems and define a documentation strategy. Their top problems were accessibility, lack of content, and findability. Their solutions involved centralizing their documentation portal, creating dynamic linking, creating topic-based content, and writing, writing, writing!

They had three strategy options: traditional web help (an off-the-shelf product; simplest, but siloed from other web properties), integrated web help (also off the shelf, but not designed for web documentation), or an enterprise CMS solution (the dream solution to create centralized documentation that would be seamless to users). Fortunately, their marketing department was also looking at AEM, which made a company-wide decision easier; this way, everything would be consistent in the look and feel of the documentation.

Content needed to be task-based. They had to author large amounts of content with a small team and deliver LOTS of content. The team was already using FrameMaker, which helped because they could integrate into AEM without issues, even with unstructured content. FrameMaker’s built-in plug-in was used to export and move all content to AEM based on formatting tags. Ultimately, the content vision was a hybrid content model that was facet-driven, customer-focused, completed workflows, provided categories (not chapters), and was SEO-optimized.

Palo Alto Networks felt that they could help customers on their experience journey by focusing on findability processes, allowing customers to be able to browse by product category, and providing a robust search engine, the ability to narrow search results (using search tagging and keywords) to provide a hybrid web-book model.

The search was SOLR-based (open source), which allowed them to customize functionality, provide keyword-matching algorithms, and index text in HTML and PDF using tagging and filtering by facets. They extended that by providing customers with related documents and constrained searches on each page, and built in a commenting section through Adobe Livefyre. They also found that measuring customer engagement with Adobe Analytics paid off: web analytics, customer adoption of documents, identification of the most-used topics, and top user demographics helped Palo Alto Networks learn their customers’ journeys.

The move to DITA using XML Documentation for AEM was ultimately made because they realized the switch to structured documentation would help them scale and ease the production of such large amounts of documentation. They found that unstructured FrameMaker yielded poor XML; it was error-prone and time-consuming, yielded poor and limited graphics and media, and bloated translation costs. They beta-tested for a while, learning how to work with structured content first, using the FrameMaker plug-in to upload XML FrameMaker files to AEM. They could use XML Documentation for AEM to generate HTML and PDF documents as final products. To make this happen, they had to build templates to drive the conversions through the plug-in.

There was no easy way to get it done: it was time-consuming, and they didn’t have a lot of information to refer to on how to do the process, but they felt that migrating existing content to DITA was their best bet.

Their process meant preparing content through retagging, automatic conversions by tagging and mapping to DITA elements, validating content manually, and syncing content manually to include the most recent content changes. This was followed up with Sitegen PDF generation by testing XML Documentation for AEM, and validating HTML and PDF by testing web pages and PDF document outputs.

They discovered that the process streamlined workflows, which enabled better agility with individual topics, single-sourcing, bulk tagging, cost reductions, and the desired HTML and PDF outputs. This helped with translation costs as well. As a result, they found that writers could focus on writing because using DITA as a standard separated style from content. They reaped the benefits of consistency, XML source files, metadata, and pre-validation.

Their TechDocs Portal 2.0 is being built from scratch with improvements that include dynamic and reusable content, consistent interfaces, robust landing pages, and improved navigation and usability.

Bernadette demonstrated the portal in developer mode, and showed it even comes with a Google search experience, version switchers based on tags, and can funnel customers through journeys with topics and videos, bringing all reusable content together to provide robust searches complete with filters.

Bernadette and Laralyn answered questions about their translation processes, how they leverage the FrameMaker plug-in to write and integrate into AEM, and how and by whom some of these choices were made, including ensuring that the content would display responsively on any device, including mobile. The main writing is done in FrameMaker offline, but quick fixes can be made in the XML editor in AEM if needed.

Large and in Charge

How the US General Services Administration (GSA) uses DITA and the flexibility of Adobe FrameMaker to deliver content faster

Tom Aldous of The Content Era continued the event with a talk about his project with the U.S. GSA (General Services Administration), one of the largest government agencies in the U.S., and how it uses DITA and the flexibility of FrameMaker to deliver content faster. The GSA manages government buildings and real estate, provides product and service procurement supplier information, and develops policies and regulations; the latter was the focus of this project.

The GSA was prompted to find a solution to manage the Federal Acquisition Regulations (FAR), used to codify uniform policies that are delegated to the Department of Defense, GSA, and NASA. The Federal Acquisition Policy Division writes and revises the FAR to implement laws, executive orders, other agency regulations, and government-wide policies. The focus was on finding something that would improve speed and cost of implementation and create consistency across many agencies and departments.

Over 1,700 pages of acquisition regulations with thousands of internal and external cross-references needed to be converted, and the GSA found that moving from Word was easier with FrameMaker because it handles DITA directly out of the box, making templates easier to customize.

The FAR migration needed to take unstructured FrameMaker content to structured FrameMaker, and the team needed to figure out how to make that conversion. FrameMaker was able to use XSL and ExtendScript/JavaScript conversions to make that functionality possible.

DITA was chosen as the preferred method because the GSA did not want to be locked into a single application vendor, and FrameMaker was the best tool for that. DITA XML is a standard with flexibility, which helped with specialization. As a requirement, published PDFs needed to match the publications put out before the transition. It was all about speed! Editing in a WYSIWYG editor made content work easier, and FrameMaker could convert the content with DITA XML and XSL transformations using ExtendScripts.

Tom spent the rest of his talk giving a live demonstration of how FrameMaker is the “multi-wrench of documentation.” He showed how the structure for the FAR was created for the acquisition policies and parts, and how that structure played an important part in how the entire site was created, using ExtendScripts where possible. The ExtendScripts helped automate certain processes to ensure that content stayed in a structured format and that content, including cross-references, could be dynamically updated with ease. The setup involved creating several automated scripts to streamline the process.

Migrating to Structure

A Hands-On Live Demo with Bernard Aschwanden

Bernard Aschwanden’s presentation on how to migrate to structured content was a typical Aschwanden presentation: an enjoyable, fast-paced, hands-on demonstration. Bernard recommended watching the actions, noting the time stamps of key steps in the forthcoming presentation video recording, and playing along later with his very short slide deck (found on SlideShare) and documents he can send you upon request. In that respect, he whizzed through, but it was easy to follow where he was going with the process without getting bogged down in deep details.

Bernard started with a standard Word paragraph, then created a template that would be the foundation of the formatting. He then imported the Word file into FrameMaker. The next step was to move the content toward DITA by creating a customizable conversion table. He then showed how XML files can be migrated to the latest version, FrameMaker 2019. This demonstrated that you will be able to use the DITA XML files no matter what, and that these files can be reformatted and customized.

Bernard then showed how the information he created and customized in FrameMaker displayed in the AEM editor by syncing between the apps. His last demonstration showed how to convert content from scratch in less than five minutes and create a conversion table that not only formats the text but also converts it into a topic at the same time.

Bernard is very good at showing how DITA is implemented at the most basic level, so I recommend that if you can, watch the recording—he goes fast, but it’s simple to follow!

Grab the Wheel and Drive Your Content to DITA!

How Networking Giant Ciena migrated to DITA successfully with a modest budget

Next, Susanna Carlisi of Ciena spoke about her success story of how her company migrated to DITA on a modest budget. Tom Aldous had helped with the project a bit, so he contributed to some of the more technical aspects of the talk.

Ciena is a networking systems, services, and software company. They had issues where users could not find the information they were looking for, didn’t know which content to use, different customers wanted different types of documents, and customers wanted documents customized to specific configurations. The key point of these issues was that there was no predictability in the content. Customers were frustrated at having to sift through many procedures in multiple locations to complete a single task, and some customers wanted libraries that would describe all the features a product supported.

The solutions were clear. They needed to improve consistency, content predictability, navigation, and usability. They also needed to provide a variety of documentation options, including customizable quick-start guides and HTML5 filters. To do this, they transitioned to topic-based authoring to enable all of these improvements. The results were improved consistency and content predictability, improved navigation by separating task, concept, and reference information, improved usability by guiding users to specific tasks, a variety of document options in different formats, and content filtering through personalized, dynamic content in HTML5 output. DITA was chosen to support their goal because the standard was extensive and they could use the parts that worked for them; Tom helped them figure out how to implement that. The idea was that once topics were written and broken down into concepts, references, and tasks, they could be shared and reassembled for different outputs.
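To illustrate what topic-based authoring looks like in practice, here is a minimal sketch of a DITA task topic; the topic ID, product, and steps are hypothetical examples, not taken from Ciena's actual documentation:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE task PUBLIC "-//OASIS//DTD DITA Task//EN" "task.dtd">
<task id="install-line-card">
  <title>Install the line card</title>
  <taskbody>
    <prereq>Ensure the chassis is powered off.</prereq>
    <steps>
      <step><cmd>Slide the card into the open slot.</cmd></step>
      <step><cmd>Tighten the retaining screws.</cmd></step>
    </steps>
    <result>The status LED turns green.</result>
  </taskbody>
</task>
```

Because task, concept, and reference information live in separate topics like this one, the topics can be assembled into different maps and republished for different audiences and outputs.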

Structured FrameMaker provides guided authoring: tagging helps authors move between the document view, the structured view, and the element catalog. FrameMaker was chosen because it supports standard DITA out of the box and includes many advanced tools to help customize DITA templates to specific needs. Publishing worked as it did in unstructured formats, producing PDFs and other outputs without a costly publishing solution, as the same content can be published via FrameMaker as HTML and EPUB. If you are using unstructured FrameMaker, transitioning to structured FrameMaker builds on an existing investment. Structured FrameMaker enforces consistency, and Adobe has huge resources to help with solutions.

Ciena’s transition strategy was to develop a foundational strategy, then create templates and do conversions in three phases. In the first phase, content and templates were converted from unstructured FrameMaker docs to DITA maps. The second phase updated the converted content to add new DITA elements. The third phase revised and restructured topics. The next-to-last stage of the strategy involved refining the metadata/reuse and FrameMaker variable/condition strategy. The last step was CCMS implementation. Susanna felt that this phased approach is more realistic for smaller budgets and teams with limited resources.

Susanna finished up her talk by showing how an original book was converted, from preparing for the conversion through implementing it. It doesn’t have to be hard or expensive! Susanna did it all on her own (with a little help from Tom).

Sharing Content across the Enterprise

Using the DITA Maturity Model to get the most out of your DITA implementation

Amber Swope talked about sharing content across the enterprise by using the DITA Maturity Model, which she helped develop, to get the most out of your DITA implementation. Amber’s talk was impeded a bit by technical glitches that ate up a big chunk of her allotted time, but she made up for it without missing a beat!

With a talk sprinkled with Cher, Kris Kross, Star Wars, Star Trek, and LEGO references (how can you not like that?), she explained that sharing across an enterprise involves a common content collection and taxonomy, where users can contribute new information, coordinate usage, collaborate on structure, and cooperate to update and maintain content.

The challenges, though, are that there’s usually lots of information to manage that is inconsistently structured, created in different formats, stored in different locations, and owned by different organizations, with multiple instances of similar or identical content, no process for sharing the content, and no common metadata.

What needs to change? The focus should be on eliminating deliverable-focused organizational ownership, moving to structured content, implementing consistent metadata, eradicating duplicate content, and developing processes for sharing. The vertical, deliverable-focused view divides marketing, tech pubs, customer support, education services, and professional services, while a horizontal content focus centers on product information, successful-usage information, and recovery information. We can’t keep thinking in terms of siloed content!

Amber defined structured content as typed, consistent, validated, identified, and versioned. She compared unstructured content to a loose pile of LEGO bricks, while structured content is like LEGO bricks that snap together and can be combined in different ways.

Common metadata is about taxonomies: the classifications and associations that provide the backbone across the organization. Having that in order makes it easier to have one content source and single-source content. An effective sharing process is the solution; we can’t solve problems by using the same kind of thinking we used to create them. Building the content collection together requires contributors such as professional services, marketing, technical publications, customer support, and education services.
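As a sketch of what common metadata can look like at the topic level, DITA provides a prolog element for classification; the topic ID, category, and keyword values below are hypothetical illustrations of shared taxonomy values:

```xml
<concept id="vpn-overview">
  <title>VPN overview</title>
  <prolog>
    <metadata>
      <!-- Values drawn from the enterprise-wide taxonomy -->
      <audience type="administrator"/>
      <category>Networking/Security</category>
      <keywords>
        <keyword>VPN</keyword>
        <keyword>tunneling</keyword>
      </keywords>
    </metadata>
  </prolog>
  <conbody>
    <p>A VPN creates an encrypted tunnel between sites.</p>
  </conbody>
</concept>
```

When every contributing group tags topics against the same taxonomy like this, the shared collection can be searched, filtered, and assembled consistently across the enterprise.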

The process is to identify the required content and define the content structure, which helps to identify metadata values. Continue by configuring content management, which involves identifying who contributes which content types. This leads to determining who needs to review what content, negotiating how to update content, and developing publishing support.

Within the model’s six levels, the upper levels are where everything comes together. At Level 4, the automation/integration level, multiple users can create, share, and use content from multiple repositories. Multiple repositories contain multiple topics, resulting in speed and efficiency; this is where a CCMS is required. At Level 5, the semantics-on-demand level, the investment is in technical integration between all systems. It’s not easy to do, and it’s handled by multiple administrators. You must have a taxonomy at this point. At Level 6, you need a universal semantic system. All content becomes usable by all stakeholders, with the goal of universal knowledge management. That’s not completely realistic, but it’s the ideal. This is the point where changes in the organizational mindset about scalable architecture make a difference. FrameMaker is scalable out of the box in that respect.

Summary: Sharing content is more than reuse. Everyone contributes to the collection. It doesn’t happen by accident; you have to work together!

From a Foggy Labyrinth to a Bright Horizon

An effective road map for adopting DITA

To finish the day, Alessandro Stazi talked about the common roadblocks and excuses that companies of all sizes use to dismiss DITA when they really need it! Alessandro broke these barriers down one by one, based on the myths behind them.

The first common barrier is that companies get caught up in buzzwords, believing that there are too many tags to use, which causes limitations, or too many tools to learn. He disproved that by showing that FrameMaker 2019 can actually help with tag sets and lets you set the DITA version you want to use. Between concept topics, tasks, references, and a few other essential tags, not that many are actually used. While FrameMaker provides more than enough tags, that doesn’t mean you have to use them all! The standard ones are all supplied, and you only need to use those that fit your needs.

The second common barrier is the belief that DITA has a terrible learning curve. Alessandro said that if you want to become a DITA master, then yes, that takes time and solid practice. But to start, only a few concepts are needed! It’s just a matter of learning how to do topic-based writing. Using DITA specialization, you can define custom topic types and define attributes of a topic that can indicate whether a whole topic, or only a specific part of it, has to be translated. DITA maps contain a hierarchy of links to a subset of topics. DITA provides mechanisms for content reuse using conref, and content filtering is done through metadata. Those are the basics at the most foundational level!
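As a sketch of those basics, a DITA map ties topics together, a conref pulls in shared content, and an audience attribute supports filtering; the filenames and IDs here are hypothetical:

```xml
<!-- getting-started.ditamap: a map is a hierarchy of links to topics -->
<map>
  <title>Getting Started</title>
  <topicref href="overview.dita" type="concept"/>
  <topicref href="install.dita" type="task"/>
  <topicref href="specs.dita" type="reference"/>
</map>

<!-- Inside install.dita: reuse a shared step via conref, and mark
     another step for conditional filtering by audience -->
<step conref="shared-warnings.dita#warnings/power-off"/>
<step audience="administrator"><cmd>Enable remote access.</cmd></step>
```

At publish time, a DITAVAL file can then include or exclude elements based on the audience attribute, which is how a single source yields different filtered outputs.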

The third most common barrier is the thought that each company is “special,” so DITA would not fit their use case. The common misperception is that DITA is only for software companies, but it can easily be used in other cases, such as manufacturing or other technology fields.

Alessandro explained that it’s usually at this point that he tells companies considering DITA migration about the advantages of adoption: DITA is an open standard; it gives you structured content and content reuse; the standard means scalability (collaborative and agile writing) and savings on translation; it outputs to multiple channels; its tags and metadata are semantic and automatable; and DITA XML along with CSS keeps content separate from presentation. This lends itself very well to the Information 4.0 standards that many companies want to employ!

Alessandro went into depth about the steps needed to adopt DITA if you can get past these barriers, but in summary, it came down to this:

Clarify your conversion goals.

Set up a small pilot team (usually a project manager, system and tools support, an information architect, a writer, and a writing team leader).

Define your pilot use case.

Assess the current state of your content.

Figure out how to schedule the conversion process—timing is everything!

Set conversion strategies.

Convert the content into DITA.

Just like Susanna’s experience at Ciena, he recommended an incremental adoption of DITA and referred to Amber’s DITA Maturity Model as a guide. He also reminded us to track metrics, as they validate that DITA is better than unstructured content with zero reuse. DITA reduces general costs through reuse and helps translation costs go down significantly.

Summary

Day 1 was a great kickoff, all about the ways that DITA can be adopted and used, with excellent use cases. If you aren’t convinced after today of how significantly DITA can change how content works for your company, then you must have been asleep watching these presentations! I’m looking forward to seeing more about how DITA can be used.

Danielle M. Villegas is a technical communicator who has most recently worked with the International Rescue Committee (IRC), MetLife, Novo Nordisk, and BASF North America, with a background in content strategy, web content management, social media, project management, e-learning, and client services. She is also an adjunct instructor at NJIT and has her own consultancy, Dair Communications. Danielle is best known in the technical communications world for her blog, TechCommGeekMom.com, which has continued to flourish since it was launched during her graduate studies at NJIT in 2012. She has presented webinars and seminars for Adobe, the Society for Technical Communication (STC), IEEE ProComm, the Institute of Scientific and Technical Communicators (ISTC)’s TCUK conference, and Drexel University’s eLearning Conference. She has written articles for Adobe, STC Intercom, STC Notebook, the Content Rules blog, The Content Wrangler, and InSyncTraining as well.
You can also follow Danielle on Twitter: @techcommgeekmom