Friday, 14 June 2013 at 10:00 am

Any developer who works with a lot of JavaScript and CSS knows the pain of positioning an element exactly right on a page while also having it respond well to window resizes.

A common frustration is creating a "drop-up" menu, where a selection menu is displayed above a button when the user clicks it. The menu's height needs to be known to position it correctly above the button, and if the menu is dynamically generated, that height can change every time the user clicks the button.
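The positioning itself is simple arithmetic once the menu's height is known. A minimal sketch (the names here are illustrative, not from any library):

```javascript
// Compute the top edge for a drop-up menu so it sits flush above the
// button. buttonTop is the button's offset from its container's top
// edge; menuHeight is the menu's rendered height (both in pixels).
function dropUpTop(buttonTop, menuHeight) {
  return buttonTop - menuHeight;
}

// In a browser this would be driven by live measurements, e.g.:
//   menu.style.top = dropUpTop(button.offsetTop, menu.offsetHeight) + 'px';
```

The hard part is not the subtraction; it is getting a real value for the menu's height in the first place.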

The height of the menu can be obtained from the element's offset properties (offsetHeight, offsetWidth, ...). However, these properties only return useful values when the browser has actually rendered the element; an element removed from layout simply reports 0.
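The usual workaround is a save-restyle-measure-restore dance: briefly render the element out of view, read its height, then put the original styles back. To keep this sketch self-contained (and runnable outside a browser), a hypothetical `measure` callback stands in for reading `offsetHeight` on a real DOM node:

```javascript
// Sketch (hypothetical helper): measure an element hidden with
// display: none by temporarily rendering it offscreen. In a browser,
// `measure` would simply be: function (n) { return n.offsetHeight; }
function measureHidden(node, measure) {
  var saved = {
    display: node.style.display,
    position: node.style.position,
    left: node.style.left
  };
  // Render the element, but out of view so the user never sees it.
  node.style.display = 'block';
  node.style.position = 'absolute';
  node.style.left = '-10000px';
  var height = measure(node);
  // Put the original styles back.
  node.style.display = saved.display;
  node.style.position = saved.position;
  node.style.left = saved.left;
  return height;
}
```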

A common technique to hide an element while still making it available to screen readers is to simply move it off the screen:

element.style.position = 'absolute';
element.style.left = '-10000px';

Versus the CSS method, which hides the element from screen readers as well:

element.style.display = 'none';

A good overview of various DOM hiding methods can be found here.

I don't really like the offscreen method: it is kind of a hack, and I can envision future browser versions breaking web apps that use it. However, it seems to be the only effective way around this problem.

Is there any performance difference between the offscreen and CSS methods? I set up a jsperf test to find out.

I created a new wrapped DOM class and prototyped hide() and offscreen() functions for the CSS and offscreen methods respectively. Then I created 100 wrapped DOM elements as the test setup. 
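The actual jsperf code isn't reproduced here, but the setup might have looked roughly like this (the class name and method bodies are my guesses; a plain object with a style bag stands in for a real DOM node so the sketch is self-contained):

```javascript
// Elem wraps a DOM node and exposes the two hiding methods under test.
function Elem(node) {
  this.node = node;
}

// CSS method: remove the element from layout entirely.
Elem.prototype.hide = function () {
  this.node.style.display = 'none';
};

// Offscreen method: keep the element rendered but move it out of view.
Elem.prototype.offscreen = function () {
  this.node.style.position = 'absolute';
  this.node.style.left = '-10000px';
};

// Test setup: 100 wrapped elements. In the real test these would wrap
// actual DOM nodes; here a bare { style: {} } object stands in.
var elems = [];
for (var i = 0; i < 100; i++) {
  elems.push(new Elem({ style: {} }));
}

// Each test case then calls one method on every element, e.g.:
elems.forEach(function (e) { e.offscreen(); });
```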

For the two test cases, I called either hide() or offscreen() on the 100 DOM elements. The offscreen() method ran almost 74% slower than hide(). I expected it to be slower, since offscreen elements still have to be laid out by the browser, but 74% is a pretty large performance hit considering most modern web apps can easily have a few hundred elements.