So much of the internet is now made up of pages containing loads of images; just visit your favourite shopping site and scroll through a product listing page for an example of this.
As you can probably imagine, bringing in all of these images when the page loads can add unnecessary bloat, causing the user to download lots of data they may not see. It can also make the page slow to interact with, due to the page layout constantly changing as new images load in, causing the browser to reprocess the page.
One popular method to deal with this is to “Lazy Load” the images; that is, to only load the images just before the user will need to see them.
If this technique is applied to the “above the fold” content – i.e., the first average viewport-sized section of the page – then the user can get a significantly faster first view experience.
So everyone should always do this, right?
Before we get on to that, let’s look at how this is usually achieved. It’s easy to find a suitable jQuery plugin or AngularJS module; a simple install command later and you’re almost done – just add a new attribute to your image tags, or call a JavaScript method to process the images whose loading you want to delay.
So surely this is a no-brainer?
Let’s look at what we’re actually trying to achieve here: display some images on a web page (achievable with html alone), but delay when they appear (which needs more than just html).
The jQuery and AngularJS solutions have a dependency on JavaScript plus jQuery or AngularJS; what if the browser doesn’t support JavaScript? What if the user doesn’t want to download a bloated library (or two, or three) when all you’re trying to achieve is an image load delay?
What if one of any number of browser toolbars, extensions, plugins, adverts, etc. throws a JavaScript error? Now your user can’t see more than a page of images! Seems pretty daft, right?
Progressively Enhanced Lazy Loading Images
Given the potential limitations, let’s work on a solution that can handle all my concerns:
a. works without JavaScript (i.e., lazy loading is an enhancement)
b. vanilla js – no dependencies on jquery or angularjs
c. works with broken JavaScript (i.e., the browser supports JavaScript, but there’s a js error somewhere which causes your script to break; might not even be your fault!)
Approaching this logically, it makes sense to use a data attribute on an image element, and swap that for the src attribute when the element is getting close to the viewport. Something like:
<img
  src="1x1.gif"
  class="lazy"
  data-src="real-image.jpg"
  alt="Laziness"
  width="300" />
and then some JavaScript like:
var lazy = document.getElementsByClassName('lazy');
for (var i = 0; i < lazy.length; i++) {
  lazy[i].src = lazy[i].getAttribute('data-src');
}
a) No JavaScript
Seems like a logical first step. So how could we change this to support no JavaScript? With a bit of html repetition perhaps:
<img
  src="1x1.gif"
  class="lazy"
  data-src="real-image.jpg"
  alt="Laziness"
  width="300" />
<noscript>
  <img
    src="real-image.jpg"
    alt="Laziness"
    width="300" />
</noscript>
That would mean that the lazy loading would be ignored if JavaScript is disabled. I did a quick check on the network usage for code like this and can confirm that a basic noscript img check using the code above does not cause multiple requests! You’d assume not, but it’s worth checking!
b) no jQuery/angularjs
Using the html above, we can write the following JavaScript method to do the data-src to src switching:
function lazyLoad() {
  var lazy = document.getElementsByClassName('lazy');
  for (var i = 0; i < lazy.length; i++) {
    lazy[i].src = lazy[i].getAttribute('data-src');
  }
}
Then let’s create a simple event wiring up helper for cross-browser support (since we’re not using jQuery):
function registerListener(event, func) {
  if (window.addEventListener) {
    window.addEventListener(event, func);
  } else {
    window.attachEvent('on' + event, func);
  }
}
And then register the lazyLoad method to execute when the page loads:
registerListener('load', lazyLoad);
Now when the page loads we’re getting all images with the lazy class and loading them using JavaScript; this certainly delays the loading, but it’s not intelligent.
Sounds like I need a bit of viewport logic. Something like this (as nicked from StackOverflow):
function isInViewport(el) {
  var rect = el.getBoundingClientRect();
  return (
    rect.bottom >= 0 &&
    rect.right >= 0 &&
    rect.top <= (window.innerHeight || document.documentElement.clientHeight) &&
    rect.left <= (window.innerWidth || document.documentElement.clientWidth)
  );
}
I’ll also need to add the viewport check:
function lazyLoad() {
  var lazy = document.getElementsByClassName('lazy');
  for (var i = 0; i < lazy.length; i++) {
    if (isInViewport(lazy[i])) {
      lazy[i].src = lazy[i].getAttribute('data-src');
    }
  }
}
and register the scroll event:
registerListener('scroll', lazyLoad);
This is bad, mkay? You shouldn’t be changing the page whilst the user is scrolling. This is meant to be an example implementation of lazy loading; please feel free to improve it!
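As a rough sketch of one possible improvement (the throttle helper below is just an illustration, not part of the demo code), you could throttle the scroll handler so lazyLoad only runs every couple of hundred milliseconds:

// simple throttle helper: run func at most once per "limit" ms
function throttle(func, limit) {
  var waiting = false;
  return function () {
    if (!waiting) {
      waiting = true;
      setTimeout(function () {
        waiting = false;
        func();
      }, limit);
    }
  };
}

registerListener('scroll', throttle(lazyLoad, 200));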
Now we’ve got a page that will only load the images within the viewport, and will load all images normally if JavaScript is disabled.
You can check it out here: http://codepen.io/rposbo/pen/xVddNr
Quick bit of refactoring
Before moving on to the “broken JavaScript” requirement, I want to tidy up the code a bit; right now it will check all lazy images on every scroll event, even if they’ve already been loaded.
This isn’t a big deal for my demo, but it may be suboptimal for pages with more images. Plus it just feels messy! I want to remove images that have already been loaded from the lazy array.
Firstly, let’s move the lazy array to a shared variable and set it in a function that’s called on load:
var lazy = [];

function setLazy() {
  lazy = document.getElementsByClassName('lazy');
}

registerListener('load', setLazy);
Ok, now we have all lazy images in that shared array, but I need to keep it up to date. I’m going to remove the data-src attribute once I’ve used it, then filter all lazy images:
function lazyLoad() {
  for (var i = 0; i < lazy.length; i++) {
    if (isInViewport(lazy[i])) {
      if (lazy[i].getAttribute('data-src')) {
        lazy[i].src = lazy[i].getAttribute('data-src');
        // remove the attribute
        lazy[i].removeAttribute('data-src');
      }
    }
  }
  cleanLazy();
}

function cleanLazy() {
  lazy = Array.prototype.filter.call(lazy, function (l) {
    return l.getAttribute('data-src');
  });
}
That feels better. Now the lazy array will always contain only those images that have not been loaded yet. However, it’s doing quite a lot during an onscroll event, as mentioned before.
This version can be found at: http://codepen.io/rposbo/pen/ONmgVG
c) Broken JavaScript
I love this requirement; it’s a tricky one to solve. If the browser says it supports JavaScript, then the noscript tags will be ignored. However, the browser may still fail to execute JavaScript for any of the reasons I mentioned at the start, or more.
How about this?
1. Load enough images to fill the viewport un-lazily; i.e., just regular img tags with their src attributes set
2. Under those images, have a link to a new page that is completely un-lazy – i.e., a whole page full of plain old <img> tags
3. Hide all lazy images using css
4. Use JavaScript to remove the link and remove the css that hides the lazy images
Let’s follow this through: if the page loads and JavaScript breaks, the user will see one screen of images (1) and a link to “view more” (2) which will take them to a full page (anchored to where they left off).
If the page loads and JavaScript is ok, the link will not be there (4) and the lazy load images will flow into view as intended (3).
Let’s try it out. You can use your own site’s analytics to see what the average user’s resolution is, and calculate how many items would fit in their initial viewport in order to decide where to put this “under the fold” link (2):
<div id="viewMore">
  <a href="flatpage.html#more">View more</a>
</div>
Assume flatpage.html is just a non-lazy version of the same page, with an anchor element at the same point in the list of items.
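For illustration only (this markup is an assumption about what flatpage.html might contain, not code from the demo), the anchor on that flat page could look something like this, so the #more fragment in the link above lands the user where they left off:

<!-- in flatpage.html, at the point where the lazy items began on the main page -->
<a id="more"></a>
<img src="real-image.jpg" alt="Laziness" width="300" />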
Now let’s initially hide the lazy load images too (3). I’m surrounding them with a new element:
<span id="nextPage" class="hidden">
  <!-- all lazy load items go here -->
</span>
and the css for that class:
.hidden {display:none;}
This will capture those users with broken JavaScript by showing an initial viewport and a link to the full page. To re-enable the lazy load for users with working JavaScript, I’m just doing this in my setLazy function (4):
// delete the view more link
document.getElementById('listing')
  .removeChild(document.getElementById('viewMore'));

// display the lazy items
document.getElementById('nextPage').removeAttribute('class');
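Putting that together with the earlier snippet, the whole setLazy function would look roughly like this (just the previous pieces combined):

function setLazy() {
  lazy = document.getElementsByClassName('lazy');

  // delete the view more link
  document.getElementById('listing')
    .removeChild(document.getElementById('viewMore'));

  // display the lazy items
  document.getElementById('nextPage').removeAttribute('class');
}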
You can see the resulting code, and play with it, in the pen: http://codepen.io/rposbo/pen/EKmXvo
Summary
As you can see, it is certainly possible to achieve lazy loading images (and other content, should you want to) whilst still allowing for both broken JavaScript and a complete lack of JavaScript support.
There’s a github repo to show the difference between the main listing page and the “flat” listing page as a more “realistic” implementation: https://github.com/rposbo/lazyloadimages
This repo shows how you might implement the solution in .Net, passing the same dynamically generated collection of items to both listing pages.
Thanks for sharing… clear explanation.
Just a small note. For the use of addEventListener, and providing a fallback to attachEvent for IE – please be aware that only IE8 and below will need this. I’d argue that if you are building a site today, you don’t need to support IE8 at all. If you do have to support IE8 (e.g. enterprise software) and haven’t already put out the End Of Life notification to your customers, you need to do so ASAP.
PS Microsoft End Of Life’d IE8, IE9, IE10 as of January 12, 2016: https://www.microsoft.com/en-ca/WindowsForBusiness/End-of-IE-support
Good point, thanks! That reduces the solution’s codebase even more 😉
Worth adding to this point that the attempt to support IE < 9 using attachEvent is also redundant, because getElementsByClassName is only supported in IE9+ anyway.
I didn’t realise that – thanks!
IE is dead already 😀
http://thenextweb.com/microsoft/2016/01/05/web-developers-rejoice-internet-explorer-8-9-and-10-die-on-tuesday/#gref
Hey,
Really great post here! I loved the examples – lazy loading is in my backlog to implement, so I’m going to be trying a lot of ideas from this post.
Thanks!
Great tutorial on the lazyloading! My big question is how do I combine this technique with srcset or responsive images?
I think that since srcset / the picture element are not dependent on javascript, you could implement each img as a picture element or a srcset img with pretty much the same functionality; it’d become really verbose, but should work. Have a go, let me know if it works!
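As a very rough, untested sketch of what that might look like (the filenames and widths here are just placeholders), you could carry a data-srcset attribute alongside data-src:

<img
  src="1x1.gif"
  class="lazy"
  data-src="small.jpg"
  data-srcset="small.jpg 480w, large.jpg 1024w"
  alt="Laziness"
  width="300" />
<noscript>
  <img
    src="small.jpg"
    srcset="small.jpg 480w, large.jpg 1024w"
    alt="Laziness"
    width="300" />
</noscript>

Then in lazyLoad you’d copy data-srcset into srcset in the same way data-src is copied into src.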
Your solution is good. Thank you 🙂
My gut reaction to the broken JS case is to ship data: URIs with small thumbnails (20×20) instead of a 1×1 placeholder. While increasing load times slightly, it leads to less vertical movement on the page, since the aspect ratio of the thumbnail will be pretty close to that of the actual image.
Interesting; I prefer to use a 1×1 px image (since it’ll be cached after the very first request, resulting in no further requests throughout the page for that image), and to explicitly set the image dimensions (height and width) in the img attributes, so the browser can have a predictable layout as no images change size.
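As a tiny sketch of that (the dimensions are just illustrative), each placeholder would carry both dimensions up front:

<img
  src="1x1.gif"
  class="lazy"
  data-src="real-image.jpg"
  alt="Laziness"
  width="300"
  height="200" />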
Why not have javascript pull the image out of the noscript tag? Then you wouldn’t need to have two tags.
Hi there. I find this lazyloading solution too simple, and it creates some perf issues. This is why I would recommend a few changes:
1. Images are only loaded after the window “load” event, but the load event is not reliable and is sometimes triggered very late. The best libraries don’t wait for the window “load” event to initialize their lazyloading.
2. The scroll event listener should be debounced to ensure a smooth scroll (especially on smartphones and MacOS X, where the scroll event can be triggered hundreds of times per second).
3. It requires downloading an image (http://spacergif.org/spacer.gif in the example), which makes one additional request. You’d be better off using a base64 image.
Thanks for the comments, glad to see someone else thinking about this! For sure, triggering any layout events during “onscroll” is bad, as I mentioned near the start.
As for the datauri – it’s an interesting point, but I prefer the actual image reference since it will be cached after the first request, so will only be requested once; whereas the datauri will not be “cached”, as it’s embedded. Not a huge issue since the smallest valid datauri image is teeeeeny, but I liked the idea of loading the placeholder image once for all lazy images
> what if the browser doesn’t support JavaScript?
Nothing; I do not expect that someone doesn’t have JS enabled in 2016.
The same story with broken JS: if you have a small script with the lazy loader alone and it is broken because of something external, pretty much every site will be broken for that user; hence there is no point doing anything about this specific issue either.
The last thing is about the code itself: calling `getBoundingClientRect` in a `scroll` handler is painfully expensive.
ya, I did say that doing stuff in “onscroll” is bad.
I, along with many other people I know, have JS disabled by default. On top of that, people working in a terminal often don’t have access to a browser that supports JS.
Estimates I’ve seen put No-JS share at between 3% and 5%. That is more than IE11 which is <3%. If you are building sites that work in IE11, you should be building sites that are usable without JS.
Interesting stats, thanks for the comment!
Check this lazy load hack: http://codepen.io/marvin52/post/hack-lazy-load
That’s an interesting approach, thanks for sharing
Have you considered using the noscript tag alone then converting it using JS upon demand?
No code repetition, like so:
https://websemantics.uk/articles/idle-load/
Nice!
Am I missing something?
If you don’t have JS enabled, you will load and view two images for each item in the list: one transparent gif (300×300 px) in the img tag, and one “real” image (300×300 px) in the noscript img tag.
The extra 300×300 px transparent img will mess up the page layout – right?
This should be fixed by adding something like this, somewhere on the page:
img.lazy { display:none; }
…. yeah… but … um… whoops! Good spot, how did I not notice that?!
Hi Robin. I feel like you’d be interested in my progressively-enhanced lazyload library: http://tylerdeitz.co/lazy-progressive-enhancement/
It works by generating an img element to be lazyloaded, cloning the content of selected noscript elements.
hey, that’s really interesting! Thanks for sharing
Nice introduction to the topic – it reveals that what many people think of as library magic is, in the end, a dead simple thing 🙂
On the other hand, the example is quite naive and the performance of such a solution would be far from excellent – hooking on the scroll event without some throttling / debouncing is an anti-pattern. The event is emitted every ~10ms, which is too often for this purpose, and on a real world example (a page big enough to really need lazy loaded images) it would likely cause lags during scrolling. Second, calling getBoundingClientRect on every image forces the browser to lay out the page, and as it’s nested inside loops (the scroll event & looping over images) it will likely lead to blocked rendering and FPS drops during scrolling.
Avoiding this is simple – calculate image positions just on resize (again, throttle / debounce the event) or on any other event which leads to images reflowing; there is no reason to do this on scroll. Delay loading until the end of scrolling – during a fast scroll it’s likely that the user will scroll images out of view before they actually load anyway.
Lastly, consider preloading more images than just the visible portion, to prevent bad UX if the network is slow.
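A minimal sketch of that preloading idea, based on the article’s isInViewport helper (the isNearViewport name and bufferFactor parameter are illustrative, not from the article):

// treat anything within "bufferFactor" extra viewport heights as near enough to load
function isNearViewport(el, bufferFactor) {
  var rect = el.getBoundingClientRect();
  var viewportHeight = window.innerHeight || document.documentElement.clientHeight;
  var buffer = viewportHeight * (bufferFactor || 1);
  return rect.top <= viewportHeight + buffer && rect.bottom >= -buffer;
}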
Good to see you’ve given improvements some thought! Yeah, as I mentioned at the start, hooking into “onscroll” is bad.
var lazy = document.getElementsByClassName('lazy');
for (var i = 0; i < lazy.length; i++) {
  lazy[i].setAttribute('data-src', lazy[i].src);
  lazy[i].src = "blank.png";
}
…
So if js is disabled, images load normally.
If js is enabled, lazyLoad works normally.
Hi!
If you want to do that, you need to execute that JavaScript after the browser has parsed/rendered all the html, by which point it will already be downloading all the images… changing the src at that point won’t prevent the requests or the downloads, so the solution is kind of pointless.
It will look like you’re lazy loading the images but under the hood they’ll all download at the beginning.
It’s better to not do this.
As Ignacio says, your browser will attempt to preload any img’s src way before any js gets a chance to kick in. Swapping them out with js will mostly be too late.
Beautiful explanation and code example. The best I found on internet after opening like 40-50 webpages.
Thank you.
cool solution !!!
The only issue with this is you’re essentially doubling the markup output… and if you have a large ecommerce website showing hundreds of products (images) on the same page, this can grow fast.
This is good for smaller pages and “landing pages.” If you run a site which relies on JavaScript to convert a sale, then use your better judgement and decide whether this is really a concern or not.
Doubling markup that’s gzipped by your server, and possibly even html minified ( http://robinosborne.co.uk/2016/05/30/the-last-frontier-of-minification-html/ ) is still, even for an “enterprise” scale site, going to be more efficient for both the end user and the site owner than wasting bandwidth with unnecessary images being loaded (i.e., images the user doesn’t see, since they don’t scroll down far enough).
Although, if you’ve got to the point where you’re thinking about limiting how much html you’re sending over the wire, you must have already optimized every other aspect of the site – in which case, excellent work!
Gzipping does help, but I guess I’d argue that expecting a browser and client to support JavaScript is “OK” at this point on the web, wouldn’t you agree?
Sure, and this is why I feel the 3rd point in the article is the most important; dealing with broken js. Your browser says js is fine, but for one of any number of reasons the js fails to load or fire. You still should try to deliver a functional (though not necessarily so enhanced) experience to your end user IMO
You should always consider that JavaScript may not load, either because the network timed out, something blocked JS rendering, the JS payload is too large, the user is accessing your site through a proxy browser like Opera Mini, or they are coming to your site from a device that can’t handle your site’s JavaScript payload.
So even if JS support is there, the JS itself may still not be OK.
It’s cool, but usually I think most businesses should settle for good old fashioned prioritising of content and breaking your message over a few pages. One of my large friction points with clients is when they want to turn their digital marketing into a magic-act and have things appear and disappear, fly in from all over the place, flashing, blinking and late loading etc. It’s all a little homer simpson for me.
Sure, but how would you ask the business to prioritise content in a product listing across multiple pages, when all they want is a smooth, continuous, product list stream?
Great article and simple code example, thank you!
I have one remark: better to use a clean function without side effects.
Hello, the initial script will check all lazy images on every scroll event as you mentioned, even if they’ve already been loaded. Instead of using the additional code, can we just remove the “lazy” class from the img?
if (isInViewport(lazy[i])) {
  lazy[i].src = lazy[i].getAttribute('data-src');
  lazy[i].classList.remove('lazy');
}
Thanks
You could, but it wouldn’t update the array; this is what the “cleanLazy” function is for.
You lost me at “What if the browser doesn’t support JavaScript.” What are you, Pepperidge Farms?
Yeah, it’s a low percentage. Even many screen-readers and search bots can handle most js. So if you’re happy that your audience will only ever consist of users on a js-capable device, it’s not terrible if you don’t code for that scenario.
Broken js on the other hand, is quite common.
I would like to know, if the data attribute fails to load – e.g. if the data-src link is broken – is there any fallback or default image, instead of showing a blank or broken image? Can we add an onerror function as a fallback if the data-src fails to load?
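A possible sketch of that onerror approach (fallback.jpg is a placeholder name, not from the article) – set the handler just before swapping the src inside lazyLoad:

// fall back to a default image if the real one fails to load
lazy[i].onerror = function () {
  this.onerror = null; // avoid looping if the fallback also fails
  this.src = 'fallback.jpg';
};
lazy[i].src = lazy[i].getAttribute('data-src');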
Thank you very much bro! It works perfectly! Regards from Indonesia!
Thanks for information brother
An excellent article.
However I would argue this is a browser feature that is required rather than a website feature.
The web server should allow its content to be loaded efficiently, but shouldn’t be expected to provide the code to do it as well!
If the browser only loaded the images that were in the viewport, there would be no need for this work around that adds additional HTML, images, JS, CSS and round trips.
I frequently find that by the time one adds all these extra resources, with the slowdown this causes, the benefit is largely lost. After all, in most cases this only affects the first page load on the site.
I agree your solution is fairly light, but the markup for every page is heavier. I agree the 1×1.gif is better than hardcoding base64 providing the filename is shorter than the base64!
Since the browsers do not offer such a feature, on very image heavy sites the benefits of this code would be apparent, and for that reason it is a useful resource.
Thanks.
Any idea if facebook will be able to find the image, either via dynamic loading or via , to display when a link to the lazy loading site is shared?
Thanks Robin,
Was looking at improving load times on my project, partly due to a multitude of pictures. Went looking for a bullet proof solution and found this blogpost. It was a nice and effective lifesaver!
Regards,
Barry
Thank you very much for sharing such information with us.
Thanks. That’s what I needed.
Amazing – 6 years later, this post still holds up. I just finished a php implementation following these design patterns, and it works great. Kudos to you Robin, thanks for taking the time to document all this – I’m very grateful, even if the codepen is totally busted, lol!
Great to hear! Thanks for letting me know the codepens are broken – I’ll add this to my ever-growing todo list.. 🙂