James Lockman's Technical Blog

Posts tagged "Adobe Experience Manager"

For many versions, Adobe Experience Manager has included support for parsing InDesign documents via InDesign Server. AEM admins could use pre-built workflow steps to send an InDesign document or InDesign Snippet to InDesign Server along with a set of scripts that InDesign Server would execute against the payload. It was possible to execute multiple scripts in sequence on the same payload, which was handy but not particularly efficient as it would invoke InDesign Server as many times as you had scripts in the workflow. In AEM 6.3, the workflow component matured to make the workflows more efficient, and to include a set of functions that help InDesign Server access content in AEM and to post output documents back to AEM for further processing. In AEM 6.4, the workflow component added a configuration to permit any MIME-type as the payload for InDesign Server, opening up a whole new set of use cases for AEM and InDesign Server.

Scripts Deconstructed

The InDesign Server workflow component is called Media Extraction. It began life as a way to extract the text, images, and metadata from InDesign documents, and it’s a core part of the built-in DAM Update Asset workflow today. As a workflow ingredient, however, Media Extraction has a lot of power if you know how to use it. Let’s explore how the Media Extraction workflow component works in AEM 6.4.

Media Extraction works by sending a payload to InDesign Server, consisting of a document and a script that InDesign Server executes on the document. As stated above, earlier incarnations only allowed the payload to be an InDesign Document (.indd) or InDesign Snippet (.idms), but 6.4 lets us send any document, as long as it passes our MIME-type filter. You can specify the MIME-type in the Process Arguments section of the workflow step. It helps to know the MIME-type of your content. You can use one of the many resources online to help identify common MIME-types, but you may want to upload a file of the desired type to AEM and then examine its /jcr:content/metadata/dam:MIMEtype node to see what AEM thinks it is.
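The check itself is just an allow-list match against the asset's MIME type. Here is a plain-JavaScript sketch of the idea; the isAllowedMime helper is illustrative, not an AEM API, and the MIME values should be verified against dam:MIMEtype in your own repository:

```javascript
// Illustrative sketch of the MIME allow-list gate in the workflow step.
// The allowedMimeTypes array stands in for the Process Arguments setting;
// verify actual values against dam:MIMEtype on an uploaded asset.
const allowedMimeTypes = [
  "application/x-indesign", // .indd
  "application/pdf"         // .pdf, if you widen the filter as AEM 6.4 allows
];

function isAllowedMime(assetMime) {
  // The payload is sent to InDesign Server only if its MIME type matches.
  return allowedMimeTypes.includes(assetMime);
}

console.log(isAllowedMime("application/pdf"));    // true
console.log(isAllowedMime("application/msword")); // false
```

A Word document fails this gate, which is exactly the behavior described below for the thumbnail workflow.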

Specify the allowed MIME types for your scripts. The default is InDesign and InDesign Snippet.

You will also need to send a script that can process the payload and return the resulting file back to AEM. The Media Extraction workflow component reads and sends .jsx files, which contain the actual script code, to your InDesign Server. The built-in scripts are located at /libs/settings/dam/indesign/scripts/ and you should not move or change them. You can copy them to /apps/settings/dam/indesign/scripts/, or leave them in place and put your own scripts in /apps/settings/dam/indesign/scripts/. The critical thing to know is that the .jsx files are actually script fragments, and that they are designed to be concatenated into one script at runtime.

Scripts are concatenated from top to bottom in the list of scripts specified in Process Arguments

There are four sections in Process Arguments: ExtendScript Library, Init Script, Extend Scripts, and Cleanup Script. It is not recommended to modify the ExtendScript Library, located at /libs/settings/dam/indesign/scripts/cq-lib.jsx, as it provides important functions related to processing the inbound payload and to returning the resulting file back to AEM. Read and understand the helper functions provided by the ExtendScript library; you will be glad you did.

If you look at the default Init Script, located at /libs/settings/dam/indesign/scripts/Init.jsx, you’ll see that it contains an unclosed try {. This try { encloses the scripts listed in the Extend Scripts section, and it closes in the Cleanup Script, located at /libs/settings/dam/indesign/scripts/Cleanup.jsx, which continues with a catch {}, as expected, for error handling. This means that each of the Extend Scripts can leverage the work done and the functions defined by the preceding scripts, including the ExtendScript Library and the Init Script, since the workflow component combines the .jsx files into one before sending the single, combined script to InDesign Server.
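Conceptually, the assembly works like this. The following is a plain-JavaScript sketch of the concatenation, not the actual AEM implementation; each string stands in for the contents of one .jsx file, in Process Arguments order:

```javascript
// Sketch of how Media Extraction assembles the script fragments (not actual AEM code).
const extendScriptLibrary = "/* cq-lib.jsx: helper functions such as putResource() */";
const initScript    = "try { /* Init.jsx: open and prepare the payload */"; // note: try is NOT closed here
const extendScripts = ["/* YourScript1.jsx */", "/* YourScript2.jsx */"];
const cleanupScript = "} catch (e) { /* report the error */ } /* Cleanup.jsx: close docs, delete temp files */";

// The fragments are concatenated top to bottom into one script, so the
// try { opened in Init wraps every Extend Script and is closed, with its
// catch, in Cleanup. Later fragments can use anything the earlier ones defined.
const combined = [extendScriptLibrary, initScript, ...extendScripts, cleanupScript].join("\n");

console.log(combined.includes("try {")); // true
```

This is why the order of the scripts in the Extend Scripts list matters: each one runs inside the same try block, after everything above it.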

If you do not specify an Init Script and a Cleanup Script, the Media Extraction component will use the default scripts. Study these two scripts to see how to prepare to handle the inbound payload, how to process errors, and how to clean up the temporary-file mess left behind by your processing. It is a good idea to use the existing Init.jsx and Cleanup.jsx files as the starting and ending points for your solution, so make copies (and name them something that stands out!) in /apps/settings/dam/indesign/scripts/ and modify those for production.

Example, please

Let’s look at an example called IDSBasedThumbnails, which you can download and install from GitHub. The package contains the scripts and a workflow model, which performs the following actions on PDF, AI, PS, or EPS files:

Sends the file to InDesign Server as a payload

Places, scales and centers the document on a new InDesign document

Exports the new document as thumbnails (PNG and JPEG)

Puts the exported files back in the repository at /jcr:content/renditions/

Wipes out the debris and closes InDesign Server

You might be asking why anyone would want to do this. Well, it turns out that AEM doesn’t have native rendering for EPS files, and the default DAM Update Asset workflow uses ImageMagick to generate previews from EPS files. I thought that it would be better to use InDesign Server, as it can handle not only EPS files, but also PDF, AI, PS, and a whole set of other asset types. In addition, InDesign can simulate overprints and flatten complex transparency during the export, which makes it a very accurate way to deliver color-managed previews for assets used in printing processes. Think of packaging, where there’s a lot of use of overprinting varnishes and spot colors. Also, InDesign Server is super fast at making these renditions, operates as a dedicated image processing server, and can scale to meet demand without impacting the AEM Server. Let’s dig in to the workflow and see some example output.

MIME-type and ExtendScripts for making thumbnails with InDesign Server

As you can see above, we have many allowed MIME-types to cover the various assets we want to preview. If you try to run the workflow on a Word document, it will not work, as it won’t pass the initial MIME test. We’ve left the ExtendScript Library alone, but we’ve made new Init.jsx and Cleanup.jsx files that focused specifically on non-InDesign documents as payloads. The bulk of the work happens in EPSThumbnailExport.jsx, and we’ll highlight some of that script here.

The function called exportThumbnail() does most of the work, and there’s a helper function called myGetBounds() at the end that returns the dimensions of the rectangle contained within the margins of the page; I’ve not included that below. I’ve also included comments to help explain what each section of code is doing. Know that many of the inputs of the exportThumbnail() function are defined by the ExtendScript Library and the Init Script, which is why those are so important to read and understand.

Once this script completes and all of the images have been written back to AEM via the putResource() calls, the Cleanup Script runs.

The result of this script is that all of the thumbnails for the specified Asset in DAM have been replaced with new thumbnails generated by InDesign Server. Here are some before and after images to give you an idea of the difference and why this model could be useful.

Here are two EPS files and one PDF uploaded to DAM. ImageMagick has failed to generate previews of the EPS files, and the PDF file shows no overprinting.

In order to run the workflow, you need to have InDesign Server installed and running, and your AEM instance needs to be configured to use InDesign Server. You can either open the workflow called DAM Update Asset with IDS Previews and run it on an asset from the Workflows panel, or you can open an asset and choose Run Workflow from the bottom of the Timeline panel for a specific asset. As configured, the workflow can’t run on a folder, since the MIME-type filter doesn’t pass folders, so you need to run it on each asset, one at a time. When you do, you will see the following result; pay close attention to the difference in the PDF thumbnail:

Once the workflow generates previews, the new thumbnails replace the existing thumbnails with color accurate, overprint-simulated previews.

The PDF thumbnail now properly respects the overprint settings in the PDF, as well as in the EPS file. This is critical in managing assets that are designed to support print workflows that make use of overprinting and multi-ink composite colors, such as packaging and book covers. You might be wondering why the previews for the CMYK Overprints.pdf and CMYK Overprints.eps are cropped differently. This is due to the way that InDesign interprets artwork boundaries when it imports assets. InDesign uses the page boundaries as defined in the EPS file when placing onto the page. PDF files can and often do have a number of boundaries available. InDesign, by default, will select the Bounding Box (Visible Layers Only) if it is available. This box is defined by the authoring application and typically exactly bounds the edges of any visible objects on the page as determined by layer visibility. You can learn more about PDF bounding boxes at this InDesign Secrets article.

InDesign defaults to the Bounding Box (Visible Layers Only) when importing PDF. You need to adjust the import preferences in your script if you want to change the default PDF import behavior.

The bounding box constants are: PDFCrop.cropPDF, PDFCrop.cropArt, PDFCrop.cropTrim, PDFCrop.cropBleed, PDFCrop.cropMedia, PDFCrop.cropContentAllLayers, PDFCrop.cropContentVisibleLayers. You can add a line before myImageFrame.place(sourceFile) to change the behavior to match how InDesign imports EPS files:

app.pdfPlacePreferences.pdfCrop = PDFCrop.cropMedia;

If you make the change, you will need to save the JSX, then reimport the JSX to your workflow, then re-sync your workflow in order for it to become available. Importing the JSX can be a confusing step, so let’s discuss that briefly. The built-in asset browser for JSX files doesn’t let you select a JSX from the file tree. It’s a known issue and it will be fixed in a later version of AEM, but for now, the Search bar is your best friend. Just enter the name of the JSX you want to import, and it’ll appear in the search results. Select it, and you’re all set.

Use the Search bar to reimport your modified JSX to the workflow step.

Once you re-import the JSX, the change is automatically saved to the workflow, but the workflow needs to be synced to become active. Once you tap the Sync button, you’re ready to go.

Be sure to tap Sync after updating the workflow step.

After you update the JSX and re-sync the workflow, the PDF and EPS thumbnails will be similar.

You could also modify the DAM Update Asset workflow to remove ImageMagick and/or the built-in PDF renderers and replace them with a new step. You would likely want to expand the script to handle multi-page PDF and AI files, however. If you’d like to explore this option, here’s a great starting point from Mike Edel for importing multi-page PDF and AI into InDesign via scripting.

Conclusion

Being able to use InDesign Server to generate better previews for EPS, PDF, and AI files is a nice benefit of the new MIME-type options in the Media Extraction workflow. This is a relatively trivial example of what a developer can do with this new capability, however. You could create a workflow that sends a whole package of items to InDesign Server, which would perform some action on those items and then return a new file or other data to AEM. Integrators can develop new editorial and creative tools based on this capability to enhance existing InDesign documents or create entirely new ones from scratch. We hope you will be inspired to add more InDesign Server to your AEM Assets workflows.

Adobe Summit 2018 has come and gone, but the sessions live on. I had the pleasure of presenting with a customer and some colleagues during the conference, and I wanted to share some thoughts about those sessions.

Adobe Named User Licensing is here to stay, and access to Adobe tools and services will require logging in to the tool or service. Customers manage their users and those users’ entitlements to Adobe tools and services in the Adobe Admin Console. One of the challenges large organizations face is how to manage those users at scale. Adobe provides a User Management API that customers can use to build integrations between their Enterprise user management system and Adobe’s Admin Console, but building and maintaining that integration is often more than an IT group wants to own. Our team built the User Sync tool, which requires only configuration and no custom development, to close the gap. This session reviews the different methods available to Enterprise customers to manage users in the Admin Console, highlighting the User Sync Tool. Kevin Bhunut and Andrew Dorton explain and demonstrate the tool in action, and there is a lively Q&A session.

A solid metadata strategy is essential to any successful DAM implementation. We invited Adam Crane from Dell to share learnings and best practices from their AEM Assets implementation journey. He tells a compelling story and offers up some insightful take-aways that are valuable to someone just starting their DAM journey as well as to folks who are at different stages along the way. No GPS required!

Relieving the pressure caused by Content Velocity demands improved collaboration between Creatives and Marketers. AEM Superstar Ian Reasor and I explore methods to expose additional metadata in AEM Assets, Adobe Bridge, and Adobe Creative Cloud desktop applications via FileInfo. We show how to reveal AEM Tags on assets in Adobe Bridge via the AEM Tags Panel for Bridge, and we dig in to some new features in AEM 6.4, including Cascading Metadata form fields, Custom Smart Tags, and Adobe Asset Link.

My developer team and I recently worked with the Bridge team on a new feature in Bridge CC called Custom Keywords. It’s a deceptively simple yet very powerful tool that lets customers expose a controlled metadata taxonomy to end users, which enforces Enterprise metadata standards and reduces metadata madness for the Enterprise content manager.

You’re probably familiar with Keywords, which is a staple metadata interface across the Creative Cloud applications. Keywords is a flexible interface to a set of defined hierarchical tags that users can define and manage. This is great, because as new classifications arise, the Creative can just add a new keyword in the right place in their hierarchy, and they’re done.

Under the hood, regular Keywords is an interface to the “subject” property in the Dublin Core schema. The interface is driven by an XML file, and the Bridge interface also includes an editor that lets Creatives manipulate the keywords to their own taste. Because it’s driven by an XML file, it’s possible for an Enterprise to create a Keywords file and propagate that to everyone who needs them. Unfortunately, since Keywords can be changed easily by the Creative, their use can lead to confusion and inconsistency when assets arrive on the Asset Manager’s desk.
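For illustration, keywords applied in Bridge land in the dc:subject bag of the file’s XMP packet. A minimal fragment of what that looks like (the keyword values here are examples):

```xml
<!-- Illustrative fragment of an XMP packet; dc:subject is where Keywords land -->
<rdf:Description rdf:about=""
    xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:subject>
    <rdf:Bag>
      <rdf:li>Packaging</rdf:li>
      <rdf:li>Overprint</rdf:li>
    </rdf:Bag>
  </dc:subject>
</rdf:Description>
```

Because dc:subject is a flat, freely editable bag, it carries no enforced hierarchy or vocabulary, which is exactly the consistency problem described next.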

Enterprise Asset Managers want to be able to control how assets are classified in their Digital Asset Management system (DAM). They do this to help improve search and discovery, to help manage asset lifecycles, and to ensure that teams are using the same vocabulary when they are classifying assets. Most DAM systems have locked taxonomies for assets. These taxonomies are exposed to end users through some kind of interface, usually but not always in a web browser. Many DAM systems leverage the extensibility of XMP (eXtensible Metadata Platform) and record the taxonomic data directly on the assets in custom namespaces. This is a very common practice, and it’s exactly the use case for which XMP was designed. How, then, to expose custom metadata to a Creative?

Time and time again, we hear of Creatives ignoring or outright rejecting a DAM because they need to interact via a web browser. Building custom metadata panels in CC desktop applications is a very viable option, but it depends on using two pathways: FileInfo Panels or Extensions. To make matters more complex, Bridge (until now!) has relied on a different Extension and FileInfo panel architecture from the rest of the CC apps, so it was hard to build consistent experiences across all applications. Now that Bridge CC supports the Adobe Common Extensibility Platform (CEP) and is using the latest FileInfo infrastructure, it is much easier for developers to build custom panels for Bridge and other CC applications.

This takes us back to the Keywords panel, which is designed to be super easy to use and doesn’t require a developer to make it work. We wanted to provide something almost as easy as Keywords, but focused on controlled taxonomy for Enterprise use cases. In specifying the new feature, we had a few key requirements:

Must be able to specify a custom URI and property

Must have check boxes like Keywords

Must be able to support multiple values like Keywords

Must be able to support hierarchy like Keywords

Must be able to support human readable text for any value

Must be able to support custom hierarchical separators

Must be a single, easy to compose XML file

The Bridge team delivered on every one of these requirements with Custom Keywords. Each Custom Keywords panel is made via a single XML file. I’ve included a CustomKeywordsExample.xml file for your reference. Bridge supports up to 10 Custom Keywords XML files; each XML needs a unique file name and must point to a unique URI (a later release will support multiple properties in the same URI). These files need to be placed in the Bridge Preferences folder, in a folder called CustomKeywordsPanel.

On Mac, this folder is located at ~/Library/Application Support/Adobe/Bridge CC 2018/CustomKeywordsPanel
On Windows, this folder is located at c:\Users\<username>\AppData\Roaming\Adobe\Bridge CC 2018\CustomKeywordsPanel
Bridge keeps version-specific preferences, so as Bridge updates, you will need to look for your specific version of Bridge and then make a new folder called CustomKeywordsPanel to store your XML files.

The panel XML starts with <CustomKeywordsPanel>, and then there’s a section called <PanelInfo> where you define the basic properties of the panel.

<PanelName> is the title at the top of the panel and in all menus

<Description> describes the panel

<Namespace> is the URI and prefix of the namespace you want the panel to read and write

<NamespaceProperty> is the property in the namespace you want to read and write with the panel

<FirstHierarchyDelimiter> separates the top level of the hierarchy from the rest of the hierarchy. It’s optional

<IncludeFirstHierarchyDelimiter> is a boolean that tells Bridge whether to use the FirstHierarchyDelimiter

<HierarchyDelimiter> is the hierarchy delimiter and is required

Next comes the taxonomy, which is contained in a section called <keywords version="2">. The taxonomy is defined by <set> and <item> tags. <item> is optional and is used at the bottom of the hierarchy for any multi-level hierarchy. <set>s contain other <set>s and can terminate with an <item>, if you choose to use <item>. Each panel needs at least one <set>. Multiple <set>s in one panel are supported.

Each <set> has a required text property called “name.” Bridge displays the name in the panel and also writes the name as the value into the XMP. If you provide an optional “value,” then Bridge will write the value but will display the name.

For instance, the following defines a panel that will read and write from tags defined by Adobe Experience Manager (AEM), which stores tags in XMP on assets managed by AEM. In this panel, we’ve restricted the panel to show only tags from one top-level category (We.Retail), which contains two second-level categories (Activity and Apparel) and their respective tags.
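A minimal panel along those lines might look like the following sketch. The element names come from the description above, but the namespace URI, prefix attribute, tag values, and exact attribute shapes are illustrative; check them against the included CustomKeywordsExample.xml before relying on them:

```xml
<CustomKeywordsPanel>
  <PanelInfo>
    <PanelName>AEM Tags</PanelName>
    <Description>We.Retail tags from AEM</Description>
    <Namespace prefix="example">http://example.com/aem/tags/1.0/</Namespace>
    <NamespaceProperty>weRetailTags</NamespaceProperty>
    <FirstHierarchyDelimiter>:</FirstHierarchyDelimiter>
    <IncludeFirstHierarchyDelimiter>true</IncludeFirstHierarchyDelimiter>
    <HierarchyDelimiter>/</HierarchyDelimiter>
  </PanelInfo>
  <keywords version="2">
    <set name="We.Retail" value="we-retail">
      <set name="Activity" value="activity">
        <item name="Running" value="running"/>
        <item name="Hiking" value="hiking"/>
      </set>
      <set name="Apparel" value="apparel">
        <item name="Shirts" value="shirts"/>
        <item name="Footwear" value="footwear"/>
      </set>
    </set>
  </keywords>
</CustomKeywordsPanel>
```

With the delimiters above, checking Running would write a value along the lines of we-retail:activity/running while displaying the human-readable names in the panel.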

The panel works just like Keywords, but you can’t change the keywords themselves. You can enable or disable them by checking and unchecking the box next to the keyword. You can make and apply Metadata Templates from them. You can see them in FileInfo when you look at the raw data. Most importantly, they provide users with an easy, natural way to interact with custom metadata on assets that support XMP.

Now, on its own, this new feature is pretty awesome. Where it gets a little complicated for an Enterprise Asset Manager is maintaining the XML file and pushing it to her end users and partners. If the taxonomy is large or changes often, then it can be a challenge to both create and update the XML. Recognizing this challenge, my team developed the AEM Tags Panel for Bridge CC.

The AEM Tags Panel for Bridge CC is an extension for Adobe Bridge that creates and installs a Custom Keywords XML file specifically for AEM Tags. A user logs in to AEM in the panel and selects the AEM Tags Namespaces they want in their Custom Keywords panel. When the user clicks “Generate Panel,” the AEM Tags Panel generates the appropriate Custom Keywords XML file and puts it in the right folder. When you restart Bridge or open a new Bridge window, the panel loads. The AEM Tags Panel also includes an Export option so that an administrator can create the XML and distribute it to users inside and outside of the Enterprise, even if they don’t have access to AEM.

Internal users will likely have access to AEM via a web browser or through the AEM Desktop tool. AEM Desktop Tool lets a user mount the AEM Assets repository as if it were a file system. Users can then interact with assets with their Creative Cloud desktop applications like they would any other assets. If they access AEM via a browser, then they will likely search for assets and download assets to their desktop for use. In either case, Bridge can “see” the assets, since they are local from Bridge’s point of view. Any assets from AEM likely have AEM Tags, which will now become visible when you install a Custom Keywords XML file.

External users, such as agencies and photographers, likely won’t have direct access to AEM, but they will be sending assets to the Asset Manager, who will eventually load them into AEM. If the Asset Manager shares a Custom Keywords XML file with the external users, then they can apply AEM Tags to the assets before they send them to the Asset Manager. The external user doesn’t need access to AEM to apply AEM Tags.

You can view a video of the whole process below.

We hope that you’ll take advantage of this great new feature of Bridge, and that if you have AEM, you’ll consider using our new AEM Tags extension to extend your Tags beyond the DAM.

I have the good fortune to own a Xerox DocuColor 3535 printer. In its day, it was a workhorse, delivering oversized tab pages in full color at blazing speeds (for the early 2000s). Mine happens to be the one with the embedded Fiery Controller, which had a lot of features, but wasn’t considered as robust as the external Fiery EX3535 or the Creo Spire RIPs that were available at the same time. Nevertheless, this machine is a trooper, producing high quality output year after year.

Unfortunately for me but also very understandably, Xerox stopped supporting this printer a long, long time ago. As a result, the last supported MacOSX version that works with the printer is 10.5. Yikes! Fortunately for me, the Fiery Command Workstation Java app still works, and it allows me to download PostScript and PDF files to the printer. The result is a clunky workflow that requires me to print to PS files on my desktop, since PDF jobs newer than the Acrobat 5 era will fail due to transparency and other issues, and then manually load them to the printer. This workflow was acceptable, and since it let me eke another few years out of the printer, I was not complaining.

Until yesterday, when I needed to print an XFA PDF from Acrobat. What the heck is an XFA PDF, you ask? Well, let’s gather ’round the fire for a moment and we’ll talk about it.

All PDF files aren’t created equally. Acrobat and Reader are very good at hiding this from you, which is by design. From Adobe’s perspective, the user shouldn’t need to worry whether a digital document was made using traditional PDF methods such as Distiller, a print driver, a PDF library, or some other standards-compliant method. For most applications, PDF is a way to describe the content and geometry of a document. Its roots are in PostScript, so it is no surprise that PDF is often viewed as a synonym for digital paper. Over time, PDF evolved to include many interactive features such as the ability to play video, run JavaScript, and even play Flash content. Even with the interactive features of a rich PDF, however, PDF is really the closest we can get to Harry Potter paper. The pages all have a definite size, the fonts don’t change when you reorient the reading application, and the experience is definitely not responsive like a web page. This is OK, though, as PDF in its current form for most people is really about paper replacement, and as such there is no equal to PDF.

Now, Adobe gave PDF to the world as an open standard, published in 2008 as ISO 32000-1, to promote the broader adoption of the format and to encourage companies to build solutions that can consume PDF built using traditional methods. You can purchase a PDF of the PDF Standard at the ISO 32000-1:2008 specification download page or download a free PDF version of the PDF Spec at the Adobe Developer Connection. Kind of meta, right? Also included in the PDF specification is a section about forms. As you are likely aware, PDF files can also behave like forms, and there is a forms editing capability in Acrobat that’s designed to help convert a paper form into a digital form. Using Acrobat’s built-in tools, you can take a picture of or scan a paper form, prepare the form, type on it to complete it, and then send it for electronic signature. Pretty awesome, if all you want to do is replicate a paper process.

Now, deep in the PDF specification is a section about XFA, or XML Forms Architecture. XFA is a PDF variant that is the basis of the LiveCycle solution (originally from JetForms, then Accelio, then Adobe), which is now known as Adobe AEM Forms. The idea is that a document could be written not as something based on a page description like a sheet of digital paper, but rather as a structured array of content that could be rendered on the fly by Acrobat or other rendering technology. It was designed for forms, because in many cases, form responses were longer or larger than the space provided. With XFA, the form can just magically get longer to accommodate. It also allowed forms designers to include interactive and design features for the person who completes the form, such as buttons to add and delete sections or fields to a form, network connections to database solutions so that the form can have up-to-date content, and much more. This all sounds amazing, right?

While Acrobat can make a form using form fields, these Acrobat-made form fields are fixed on the page in location and dimension, and the average user can’t modify the layout of the page to accommodate more content. Acrobat can’t make an XFA form, but it can read, display, and interact with XFA forms. In order to make XFA forms, you need to use Adobe LiveCycle Forms Designer or generate them through automation using AEM Forms. LiveCycle Designer was previously included as a component of LiveCycle and in other desktop software bundles, but it is now only available to LiveCycle and AEM Forms customers. Why, you ask? XFA is used heavily by Insurance, Financial Services, Health Care and Government customers who use the business process, security, digital signature, document automation, system integration, and other capabilities of LiveCycle or AEM Forms. In addition, while XFA is included in the PDF specification, few other companies have invested resources in developing solutions around XFA PDF, including reading, viewing, and interacting with an XFA PDF, so the only way to read, view and interact with an XFA PDF is to use Acrobat or Reader on a desktop computer. This is just fine when the intent is to enable workers in an Enterprise to engage with Enterprise business process using complex forms, but for the general user, it’s overkill.

This doesn’t mean that companies don’t use LiveCycle Designer to make standalone forms, which takes us back to the original premise. The Boy Scouts of America uses LiveCycle Designer to produce the forms that Scouts use to manage and make the final reports for their Eagle Scout project. This form is great, because the Scout can use it as a notebook for their project. It includes fields with text, tables and photographs, and it allows the Scout to add and remove fields as necessary to accommodate the details of their project. For my son, this document grew to 34 pages and over 30 MB due to the inclusion of many photographs of his project. Now, even though the form is electronic, the local group that reviews the Eagle Rank Advancement wants a printed binder that includes these 34 pages as well as some other content, which is why I needed to print the PDF in the first place.

Printing the PDF proved to be very challenging. My usual method of uploading the PDF to the Fiery didn’t work, since the Fiery doesn’t support XFA PDF. I knew this ahead of time, so I tried to convert to PostScript from Acrobat. This also didn’t work. I knew that I could print to my inkjet printer without issue, so I decided to see if I could print to the 3535.

I remembered that Apple’s printing system is based on CUPS, their open source *nix printing architecture. I also know that it supports a wide array of network connections and adheres to the PostScript Printer Description model of defining the capabilities and limitations of a printer. I knew that while the printer has a built-in AppleTalk server and a built-in port 9100 server, neither of these connections works with modern Mac OS. I also remembered that the printer exposes a Windows print share, but I was unsuccessful in printing to it.

I wanted to look at the options for the printer, which means either using the embedded web server (which doesn’t support anything beyond Internet Explorer on a Mac. Seriously.) or using Command Workstation 5. The Printer Setup utility requires the Apple-provided Java 6, so I needed to install that in addition to Java 8 (which I use for other applications, including Adobe Experience Manager). Now that I had Java 6 installed and Command Workstation 5 installed, I found the LPD and IPP options buried under the Service2 tab of the Network Setup. Nice! I ensured that these were enabled and went back to my Mac to try to add the printer. With the proper PPD in hand (I downloaded the software installer from Xerox), I opened up System Preferences and then the Printers & Scanners option, then clicked the + button to add a printer. I tried first with IPP, and I was unable to add a printer. I next tried with LPD, and again was unable to add a printer. In both cases, you need to specify the IP address, the queue you want to target (in my case I want the hold queue), and the PPD. Then I remembered that the printing system is CUPS, and that CUPS has a console.

The web interface to CUPS is off by default. You can enable it by going to Terminal and running the command “cupsctl WebInterface=yes”

Now that the CUPS web interface is enabled, open http://localhost:631 and you will see CUPS in all its 1994-styled glory, complete with buttons and hyperlinks that all tell you exactly what they will do. This interface is designed to be USEFUL, not pretty, so don’t go all UX on me now. You want the Administration tab, so click it and then click on Add Printer under the Printers section. You will need to enter your administrator’s user name and password, which is expected. You will now see several sections, including your installed printers, printers that CUPS can see, and also, at the bottom, the Other Network Printers section. You want to click the radio button (I told you it was antique) next to LPD/LPR Host or Printer, then click the Continue button.

Select LPD/LPR Host or Printer

On the next screen, enter the complete URI for your printer, including protocol and queue. For me, that was “lpd://192.168.1.22/hold”. There are three queues available on this device: “lpd://192.168.1.22/direct”, “lpd://192.168.1.22/hold”, and “lpd://192.168.1.22/print”. Enter your URI, then click the Continue button.
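The pattern behind those three URIs is lpd://&lt;host&gt;/&lt;queue&gt;. As a quick sketch (using the host address from this post; substitute your printer’s IP), you can print out all three to copy and paste from:

```shell
# Build the LPD URI for each Fiery queue: lpd://<host>/<queue>
host="192.168.1.22"
for queue in direct hold print; do
  echo "lpd://${host}/${queue}"
done

# Optional sanity check that the printer answers on the LPD port (515):
#   nc -z 192.168.1.22 515
```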

Enter your URI with protocol and print queue

On the next screen, enter a name and description for the printer. The name needs to be web friendly, so no spaces, slashes, or hashes. If you want, you can also share the printer so others in your house or work group can access it. If you do, your computer will become a print spooler for the Xerox machine, so be prepared for network activity if you’re in a company or group with several folks who’ve been jonesing to print to your 3535. In addition, you will need to go back to the CUPS Administration page and enable the “Share printers connected to this system” option, which will force a restart of CUPS on your computer. When you’ve finished debating the pros and cons of becoming a print server, click the Continue button.

Add a name and description to your printer.
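Incidentally, the sharing options are reachable from Terminal too. As a sketch, the command-line equivalent of that Administration checkbox looks like this (the queue name below is hypothetical):

```shell
# Equivalent of "Share printers connected to this system";
# like the web interface, this restarts CUPS
cupsctl --share-printers

# Share (or unshare) one specific queue by name
lpadmin -p Xerox3535_hold -o printer-is-shared=true

# Turn global sharing back off
cupsctl --no-share-printers
```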

Now, this is the part where you need your PPD. Click the Choose File button, and browse to your languishing PPD from the turn of the century. Once you’ve selected it, click Create Printer.

The final step is to add your PPD.

Voila! You now have a functional printer that prints to the Xerox 3535 embedded Fiery hold queue. You should see a page asking you to set the default options for the printer, which are defined by the PPD. These will apply to every job you send if you do not override the defaults, so it’s a good idea to browse through the settings one by one and tune them to your specific setup. Once satisfied, click the Set Default Options button.

Set the default printer options for your printer

After you set the default options, you should send a test page. Return to the CUPS page and click on the Printers tab, then on your newly minted printer. You should see two drop-down menus under the printer status line. Click Maintenance and then choose Print Test Page. This will send a test page to your 3535’s hold queue. You’ll need to go to Command Workstation to verify that the page was sent, but you can get instant satisfaction if you build a printer that points to the print queue instead of the hold queue.

Be sure to print a test page to validate your setup.
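If you’d rather script the whole thing than click through the web interface, the same queue can be created and exercised with lpadmin and lp. This is a sketch, not a recipe: the queue name and PPD path are assumptions, and only the URI comes from this post.

```shell
# Create and enable (-E) a queue pointing at the Fiery hold queue,
# using the Xerox-supplied PPD (path is hypothetical)
lpadmin -p Xerox3535_hold -E \
  -v lpd://192.168.1.22/hold \
  -P ~/Downloads/XR3535.ppd

# Confirm CUPS knows about the new queue
lpstat -p Xerox3535_hold

# Send a test job (any PostScript or PDF file will do; filename is hypothetical)
lp -d Xerox3535_hold test-page.pdf
```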

All of this work was to print an XFA PDF, remember? Heading back to Acrobat, I was able to print a copy of my son’s Eagle Scout paperwork lickety-split on my very old, out-of-support PostScript laser printer. If you’ve got one of these or another older, seemingly unsupported PostScript printer lying around, power it up and see if you can use CUPS and a PPD to get it back in service again.

I presented a talk entitled Creating Content for Mobile Apps with Adobe Digital Publishing Solution at Adobe MAX 2015, and the recording is available here.

While the Adobe Digital Publishing Suite is a magazine platform that had many Enterprise applications, the Adobe Digital Publishing Solution (DPS 2015) is an Enterprise platform that can be used for magazines. Magazines are one of many use cases for which DPS 2015 is appropriate, and in this session, we see how to use a number of content sources in addition to InDesign to make articles for DPS 2015. These include but are not limited to Adobe Experience Manager, Adobe Muse, Adobe Dreamweaver, Adobe Captivate, Adobe FrameMaker and even Adobe Acrobat.

It is clear that content producers have more choices than ever when it comes to making, managing and deploying content to readers on tablet and mobile devices. Adobe DPS 2015 offers Enterprises a flexible, measurable and cross-platform solution for a wide range of communications use cases.