DOM – Thoughts From Eric
https://meyerweb.com/eric/thoughts
Things that Eric A. Meyer, CSS expert, writes about on his personal Web site; it's largely Web standards and Web technology, but also various bits of culture, politics, personal observations, and other miscellaneous stuff.

Element Dragging in Web Inspectors
https://meyerweb.com/eric/thoughts/2017/01/18/element-dragging-in-web-inspectors/
Wed, 18 Jan 2017 15:30:51 +0000

YouTube: “Dragging elements in Firefox Nightly’s Web Inspector”
Since I recorded the video, I’ve learned that this same capability exists in public-release Firefox, and has been in Chrome for a while. It’s probably been in Firefox for a while, too. What surprised me was how many other people were similarly surprised that this is possible, which is why I made the video. It’s probably easier to understand the video if it’s full screen, or at least expanded, but I think the basic idea gets across even in small-screen format. Share and enjoy!

Undoing oncut/oncopy/onpaste Falsities
https://meyerweb.com/eric/thoughts/2015/07/07/undoing-oncutoncopyonpaste-falsities/
Wed, 08 Jul 2015 02:13:46 +0000
Inspired by Ryan Joy’s excellent and deservedly popular tweet, I wrote a small, not-terribly-smart JavaScript function to undo cut/copy/paste blocking in HTML.
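The spirit of the thing is simple. Here's a minimal sketch of the approach — my reconstruction for illustration, not necessarily the bookmarklet's exact source:

```javascript
// Minimal sketch (a reconstruction, not necessarily the bookmarklet's
// exact source): clear cut/copy/paste handlers and attributes on the
// document, the body, and every element in the page.
function fixCCP() {
  var events = ["cut", "copy", "paste"];
  var targets = [document, document.body].concat(
    Array.prototype.slice.call(document.getElementsByTagName("*"))
  );
  targets.forEach(function (el) {
    events.forEach(function (evt) {
      el["on" + evt] = null;             // clear DOM-level handlers
      if (el.removeAttribute) {
        el.removeAttribute("on" + evt);  // clear markup-level attributes
      }
    });
  });
}
```

Note this only undoes handlers assigned via `on*` properties or attributes; blockers attached with `addEventListener` would survive it.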

Here it is as a bookmarklet, if you still roll that way (as I do): fixCCP. Thanks to the Bookmarklet Maker at bookmarklets.org for helping me out with that!

If there are obvious improvements to be made to its functionality, let me know and I’ll throw it up on GitHub.

Invented Elements
https://meyerweb.com/eric/thoughts/2012/03/23/invented-elements/
Fri, 23 Mar 2012 14:16:58 +0000
This morning I caught a pointer to TypeButter, which is a jQuery library that does “optical kerning” in an attempt to improve the appearance of type. I’m not going to get into its design utility because I’m not qualified; I only notice kerning either when it’s set insanely wide or when it crosses over into keming. I suppose I’ve been looking at web type for so many years that it looks normal to me now. (Well, almost normal, but I’m not going to get into my personal typographic idiosyncrasies here.)

My reason to bring this up is that I’m very interested by how TypeButter accomplishes its kerning: it inserts kern elements with inline style attributes that bear letter-spacing values. Not span elements, kern elements. No, you didn’t miss an HTML5 news bite; there is no kern element, nor am I aware of a plan for one. TypeButter basically invents a specific-purpose element.
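To make the mechanism concrete, here's a string-level illustration of the idea — this is my own toy sketch, not TypeButter's actual code — in which each character gets wrapped in an invented kern element carrying an inline letter-spacing value:

```javascript
// Illustrative only, not TypeButter's source: wrap each character in a
// <kern> element whose inline style carries a per-character
// letter-spacing value (in ems); unspecified gaps default to 0.
function kernMarkup(text, spacingEms) {
  return text.split("").map(function (ch, i) {
    var s = spacingEms[i] || 0;
    return '<kern style="letter-spacing:' + s + 'em;">' + ch + '</kern>';
  }).join("");
}
```

So `kernMarkup("To", [-0.05])` tightens the gap after the T while leaving the o alone — exactly the kind of markup TypeButter's generated output resembles.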

I believe I understand the reasoning. Had they used span, they would’ve likely tripped over existing author styles that apply to span. Browsers these days don’t really have a problem accepting and styling arbitrary elements, and any that do would simply render type their usual way. Because the markup is script-generated, markup validation services don’t throw conniption fits. There might well be browser performance problems, particularly if you optically kern all the things, but used in moderation (say, on headings) I wouldn’t expect too much of a hit.

The one potential drawback I can see, as articulated by Jake Archibald, is the possibility of a future kern element that might have different effects, or at least be styled by future author CSS and thus get picked up by TypeButter’s kerns. The currently accepted way to avoid that sort of problem is to prefix with x-, as in x-kern. Personally, I find it deeply unlikely that there will ever be an official kern element; it’s too presentationally focused. But, of course, one never knows.

If TypeButter shifted to generating x-kern before reaching v1.0 final, I doubt it would degrade the TypeButter experience at all, and it would indeed be more future-proof. It’s likely worth doing, if only to set a good example for libraries to follow, unless of course there’s a downside I haven’t thought of yet. It’s definitely worth discussing, because as more browser enhancements are written, this sort of issue will come up more and more. Settling on some community best practices could save us some trouble down the road.

Update 23 Mar 12: it turns out custom elements are not as simple as we might prefer; see the comment below for details. That throws a fairly large wrench into the gears, and requires further contemplation.

Pseudo-Phantoms
https://meyerweb.com/eric/thoughts/2009/11/03/pseudo-phantoms/
Tue, 03 Nov 2009 17:41:10 +0000
In the course of a recent debugging session, I discovered a limitation of web inspectors (Firebug, Dragonfly, Safari’s Web Inspector, et al.) that I hadn’t quite grasped before: they don’t show pseudo-elements and they’re not so great with pseudo-classes. There’s one semi-exception to this rule, which is Internet Explorer 8’s built-in Developer Tool. It shows pseudo-elements just fine.

Here’s an example of what I’m talking about:

p::after {content: " -\2761-"; font-size: smaller;}

Drop that style into any document that has paragraphs. Load it up in your favorite development browser. Now inspect a paragraph. You will not see the generated content in the DOM view, and you won’t see the pseudo-element rule in the Styles tab (except in IE, where you get the latter, though not the former).

The problem isn’t that I used an escaped Unicode reference; take that out and you’ll still see the same results, as on the test page I threw together. It isn’t the double-colon syntax, either, which all modern browsers handle just fine; and anyway, I can take it back to a single colon and still see the same results. ::first-letter, ::first-line, ::before, and ::after are all basically invisible in most inspectors.
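There is one programmatic workaround, for what it's worth: getComputedStyle accepts a pseudo-element name as its second argument, so you can at least query the styles the inspector won't show you. A small sketch (the function name is mine):

```javascript
// Sketch: query an element's ::after styles directly, since most
// inspectors won't display them. The second argument to getComputedStyle
// names the pseudo-element to resolve against.
function inspectAfter(el) {
  var view = el.ownerDocument.defaultView;
  var style = view.getComputedStyle(el, ":after");
  return {
    content: style.getPropertyValue("content"),
    fontSize: style.getPropertyValue("font-size")
  };
}
```

It's no substitute for seeing the rules in the Styles tab, but it beats guessing.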

This can be a problem when developing, especially in cases such as having a forgotten, runaway generated-content clearfix making hash of the layout. No matter how many times you inspect the elements that are behaving strangely, you aren’t going to see anything in the inspector that tells you why the weirdness is happening.

The same is largely true for dynamic pseudo-classes. If you style all five link states, only two will show up in most inspectors—either :link or :visited, depending on whether you’ve visited the link’s target; and :focus. (You can sometimes also get :hover in Dragonfly, though I’ve not been able to do so reliably. IE8’s Developer Tool always shows a:link even when the link is visited, and doesn’t appear to show any other link states. …yes, this is getting complicated.)

The more static pseudo-classes, like :first-child, do show up pretty well across the board (except in IE, which doesn’t support all the advanced static pseudo-classes; e.g., :last-child).

I can appreciate that inspectors face an interesting challenge here. Pseudo-elements are just that, and aren’t part of the actual structure. And yet Internet Explorer’s Developer Tool manages to find those rules and display them without any fuss, even if it doesn’t show generated content in its DOM view. Some inspectors do better than others with dynamic pseudo-classes, but the fact remains that you basically can’t see some of them even though they will potentially apply to the inspected link at some point.

I’d be very interested to know what inspector teams encountered in trying to solve this problem, or if they’ve never tried. I’d be especially interested to know why IE shows pseudo-elements when the others don’t—is it made simple by their rendering engine’s internals, or did someone on the Developer Tool team go to the extra effort of special-casing those rules?

For me, however, the overriding question is this: what will it take for the various inspectors to behave more like IE’s does, and show pseudo-element and pseudo-class rules that are associated with the element currently being inspected? And as a bonus, to get them to show in the DOM view where the pseudo-elements actually live, so to speak?

(Addendum: when I talk about IE and the Developer Tool in this post, I mean the tool built into IE8. I did not test the Developer Toolbar that was available for IE6 and IE7. Thanks to Jeff L for pointing out the need to be clear about that.)

JavaScript Will Save Us All
https://meyerweb.com/eric/thoughts/2008/10/22/javascript-will-save-us-all/
Wed, 22 Oct 2008 13:05:57 +0000
A while back, I woke up one morning thinking, John Resig’s got some great CSS3 support in jQuery but it’s all forced into JS statements. I should ask him if he could set things up like Dean Edwards’ IE7 script so that the JS scans the author’s CSS, finds the advanced selectors, does any necessary backend juggling, and makes CSS3 selector support Transparently Just Work. And then he could put that back into jQuery.

And then, after breakfast, I fired up my feed reader and saw Simon Willison’s link to John Resig’s nascent Sizzle project.

I swear to Ged this is how it happened.

Personally, I can’t wait for Sizzle to be finished, because I’m absolutely going to use it and recommend its use far and wide. As far as I’m concerned, though, it’s a first step into a larger world.

Think about it: most of the browser development work these days seems to be going into JavaScript performance. Those engines are being overhauled and souped up and tuned and re-tuned to the point that performance is improving by orders of magnitude. Scanning the DOM tree and doing things to it, which used to be slow and difficult, is becoming lightning-fast and easy.

So why not write JS to implement multiple background-image support in all browsers? All that’s needed is to scan the CSS, find instances of multiple-image backgrounds, and then dynamically add divs, one per extra background image, to get the intended effect.
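As a hedged sketch of how such a script might start (names and structure are mine, not any existing library's), the first step is splitting a multiple-background value into its component images, keeping one for the element and handing back the extras for generated divs:

```javascript
// Hypothetical helper, not an existing library: split a CSS3 multiple
// background-image value, keep the first image on the element itself,
// and return the extras so the caller can generate one div per image.
// (Naive comma split — fine for url() lists, not for values that can
// themselves contain commas, such as gradients.)
function splitBackgrounds(value) {
  var images = value.split(",").map(function (s) { return s.trim(); });
  return {
    first: images[0],      // stays on the original element
    extras: images.slice(1) // one absolutely-positioned div apiece
  };
}
```

The rest is layering: position each generated div behind the element's content and paint one image on each.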

Just like that, you’ve used the browser’s JS to extend its CSS support. This approach advances standards support in browsers from the ground up, instead of waiting for the browser teams to do it for us.

I suspect that not quite everything in CSS3 will be amenable to this approach, but you might be surprised. Seems to me that you could do background sizing with some div-and-positioning tricks, and text-shadow could be supportable using a sIFR-like technique, though line breaks would be a bear to handle. RGBa and HSLa colors could be simulated with creative element reworking and opacity, and HSL itself could be (mostly?) supported in IE with HSL-to-RGB calculations. And so on.

There are two primary benefits here. The first is obvious: we can stop waiting around for browser makers to give us what we want, thanks to their efforts on JS engines, and start using the advanced CSS we’ve been hearing about for years. The second is that the process of finding out which parts of the spec work in the real world, and which fall down, will be greatly accelerated. If it turns out nobody uses (say) background-clip, even given its availability via a CSS/JS library, then that’s worth knowing.

What I wonder is whether the W3C could be convinced that two JavaScript libraries supporting a given CSS module would constitute “interoperable implementations”, and thus allow the specification to move forward on the process track. Or heck, what about considering a single library getting consistent support in two or more browsers as interoperable? There’s a chance here to jump-start the entire process, front to back.

It is true that browsers without JavaScript will not get the advanced CSS effects, but older browsers don’t get our current CSS, and we use it anyway. (Still older browsers don’t understand any CSS at all.) It’s the same problem we’ve always faced, and everyone will face it differently.

We don’t have to restrict this to CSS, either. As I showed with my href-anywhere demo, it’s possible to extend markup using JS. (No, not without breaking validation: you’d need a custom DTD for that. Hmmm.) So it would be possible to use JS to, say, add audio and video support to currently-available browsers, and even older browsers. All you’d have to do is convert the HTML5 element into HTML4 elements, dynamically writing out the needed attributes and so forth. It might not be a perfect 1:1 translation, but it would likely be serviceable—and would tear down some of the highest barriers to adoption.
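A toy sketch of that kind of down-conversion, purely to show the shape of the idea (the regex, the chosen HTML4 element, and the MIME type are all illustrative assumptions):

```javascript
// Toy down-conversion sketch: rewrite an HTML5 <video src="..."> tag into
// an HTML4 <object> equivalent. A real translation would need to handle
// more attributes, codecs, nested source elements, and fallback content;
// this only demonstrates the string-rewriting shape of the idea.
function downconvertVideo(html) {
  return html.replace(/<video\s+src="([^"]+)"\s*><\/video>/g,
    '<object data="$1" type="video/mp4"></object>');
}
```

Run over a document's markup before the browser chokes on the unknown element, something like this could paper over the gap.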

There’s more to consider, as well: the ability to create our very own “standards”. Maybe you’ve always wanted a text-shake property, which jiggles the letters up and down randomly to look like the element got shaken up a bit. Call it -myCSS-text-shake or something else with a proper “vendor” prefix—we’re all vendors now, baby!—and go to town. Who knows? If a property or markup element or attribute suddenly takes off like wildfire, it might well make it into a specification. After all, the HTML 5 Working Group is now explicitly set up to prefer things which are implemented over things that are not. Perhaps the CSS Working Group would move in a similar direction, given a world where we were all experimenting with our own ideas and seeing the best ideas gain widespread adoption.
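For instance, that hypothetical text-shake could be faked in script today. A toy sketch — the property and everything about it is invented, and the random source is injectable so the output can be pinned down:

```javascript
// Toy sketch of an invented -myCSS-text-shake effect: wrap each letter in
// a span nudged up or down by a random vertical offset (up to maxPx).
// The rand parameter is injectable so output can be made deterministic.
function shakeMarkup(text, maxPx, rand) {
  rand = rand || Math.random;
  return text.split("").map(function (ch) {
    var off = ((rand() * 2 - 1) * maxPx).toFixed(1);
    return '<span style="position:relative;top:' + off + 'px;">' + ch + '</span>';
  }).join("");
}
```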

In the end, as I said in Chicago last week, the triumph of standards (specifically, the DOM standard) will permit us to push standards support forward now, and save some standards that are currently dying on the vine. All we have to do now is start pushing. Sizzle is a start. Who will take the next step, and the step after that?

Reserved ID Values?
https://meyerweb.com/eric/thoughts/2005/08/29/reserved-id-values/
Mon, 29 Aug 2005 21:42:21 +0000
As a followup to my entry about id="tags" causing problems in IE/Win, here are five test pages for IE/Win:

These are based on Kevin Hamilton’s observation that it’s highly likely the problems are caused by the tags method in IE/Win’s document.all DOM interface. As he says:

[I]f you have an element with an id=”tags”, then document.all.tags is now a reference to that element, and no longer a method of the document.all object.

Such states would completely shatter any IE DOM scripting that relied on the document.all methods, and, at least in the case of tags, cause problems like crashing on print (probably because of the aforementioned conflict between the ID value and the DOM method). The other keywords of concern are chronicled in the test pages listed above. I’d test IE/Win myself, except I don’t have a printer handy for IE/Win to use, and besides, bug-hunting is best conducted in large groups.
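The collision is easy to model even without IE at hand. Treating document.all as a plain object whose properties can be shadowed by named elements (a mocked stand-in, since the real interface is IE-only), a small sketch:

```javascript
// Mocked illustration — document.all is IE-only, so any object with the
// collection's property names can stand in for it. An element whose id
// matches an existing property would shadow that property with an
// element reference, breaking scripts (and apparently IE itself).
function riskyIds(collection, candidateIds) {
  return candidateIds.filter(function (id) {
    return id in collection; // existing property, method or not, gets shadowed
  });
}
```

Any id the function flags is one I'd steer clear of.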

Basically, load up each test page in IE/Win and do anything you can think to do. Try to print, view source, save a local copy, et cetera, et cetera—the more obscure and offbeat, the better. Let us know via the comments any problems you run into with said pages (trying to print them is a good first step, since that’s what messed up on tags) and I’ll add notes to each page based on what’s found.

In the meantime, I’m personally going to avoid using any of those words as ID values, and heartily recommend the same to you.

Update: I’ve added a test (for length) to the above list, and have another that’s not on the list due to its unfinished nature. It’s a test of id="all"; the problem is, I don’t really know how to test it, or if it’s likely to be a problem at all. Suggestions are welcomed in the comments. I added some JavaScript links to some of the test pages as a secondary test, but I’m not sure how much good they do, to be honest. As with suggestions, your feedback is welcome.

When Printing Kills
https://meyerweb.com/eric/thoughts/2005/08/26/when-printing-kills/
Sat, 27 Aug 2005 01:32:33 +0000
Here’s a fascinating little tidbit: on some users’ machines, attempts to print out Joe Clark’s ALA article “Facts and Opinions About PDF Accessibility” would crash Internet Explorer. The error message mentioned a script error in line 1401: “Object doesn’t support this property or method”. Funny thing: we weren’t doing any scripting. The error was actually occurring in shdoclc.dll/preview.dlg, which is of course a piece of the operating system.

(And yes, we’re aware of the clamor for a print style sheet. More on this later.)

Update: Marten Veldthuis from Strongspace points out that 37signals ran into a very similar problem in Backpack. Details can be found in Jamis Buck’s June 3rd post ie-is-teh-3v1l. Spread the word: “tags” is effectively a reserved keyword, even though no such concept exists in (X)HTML. Use it at your (users’) peril.