David Walsh Blog
https://davidwalsh.name
A blog featuring tutorials about JavaScript, HTML5, AJAX, PHP, CSS, WordPress, and everything else development.

Sorting Strings with Accented Characters
https://davidwalsh.name/sorting-strings-accented-characters
Thu, 08 Dec 2016

Strings can create a whole host of problems within any programming language. Whether it's a simple string or one containing emojis, HTML entities, or accented characters, if we don't scrub data or make the right string-handling choices, we can be in a world of hurt.

While looking through Joel Lovera’s JSTips repo, I spotted a string case that I hadn’t run into yet (…I probably have but didn’t notice it): sorting accented characters to get the desired outcome. The truth is that accented characters are handled a bit differently than you’d think during a sort:

Yikes — accented characters don’t simply follow their unaccented character counterparts. By taking an extra step, i.e. localeCompare, we can ensure that our strings are sorted in the way we likely wanted in the first place:
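To illustrate, here's a quick sketch (the names are my own illustrative data):

```javascript
const names = ['Émile', 'Adam', 'Zoë', 'Éric', 'Ben'];

// The default sort compares UTF-16 code unit values, so accented
// characters land after every unaccented letter:
console.log(names.slice().sort());
// → ['Adam', 'Ben', 'Zoë', 'Émile', 'Éric']

// localeCompare applies the locale's collation rules instead:
console.log(names.slice().sort((a, b) => a.localeCompare(b, 'en')));
// → ['Adam', 'Ben', 'Émile', 'Éric', 'Zoë']
```

Passing a comparator that delegates to localeCompare keeps accented names next to their unaccented neighbors.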

Cloudinary Improves User Experience through Intelligent Delivery (Sponsored)
https://davidwalsh.name/cloudinary-improves-user-experience-intelligent-delivery
Wed, 30 Nov 2016

Over the past months I've been showing you techniques for optimizing your site media with Cloudinary. Cloudinary has you covered with APIs in just about every programming language, a host of useful add-ons, the ability to modify images on the fly by modifying URLs, implementation of bleeding edge browser APIs, and a whole lot more. Hosting and serving media with Cloudinary […]

Cloudinary has just announced that they now support “multi-CDN”, meaning you no longer need to select the best CDN, handle the technical integration, or fit the CDN to your needs: Cloudinary does it all for you! Each of Cloudinary's supported CDNs has different advantages and special features; Cloudinary maps your requirements to the best-matching CDN. CDN variables and features can include:

CDNs are incredibly important when creating lightning fast, always available websites. Numbers show that the longer your site takes to load, the more likely your users are to bail. Images account for approximately 60% of downloaded content, and if 60% of your assets are slow to download, you’re losing visitors and revenue. CDNs serve your assets not just on lightning fast servers but servers located throughout the world, so visitors get the fastest possible download based on their location.

Leading media websites like Twitter, Facebook, and Netflix have adopted the multi-CDN approach to more efficiently serve media throughout the world — Cloudinary can now do that for you too!

Cloudinary’s service keeps improving by providing its users flexibility along with cutting edge technology and strategies. What I love most about what Cloudinary does is they seem to solve everything, and now they’ve solved CDN with the new multi-CDN feature. Every media storage, delivery, and transformation feature I can think of is supported by Cloudinary. Give Cloudinary a try so you can see everything they have to offer!

CodeMirror: Set Focus at End of Line
https://davidwalsh.name/codemirror-set-focus-line
Sat, 26 Nov 2016

CodeMirror is a WYSIWYG-like editor that allows for rich text editing on a small scale, oftentimes used for Markdown editing, as ReviewBoard uses it. One problem I've found, however, is that calling a CodeMirror instance's focus method puts the cursor at the beginning of the input, which is annoying when there is already input in the field. In theory you'd always want to put the cursor at the end so that the user can continue adding to the text that's already there.

Here’s a snippet that will set the cursor to the end of existing input:

cmInstance.focus();
// Set the cursor at the end of existing content. CodeMirror clips
// out-of-range positions, so passing lineCount() as the line number
// safely resolves to the end of the document.
cmInstance.setCursor(cmInstance.lineCount(), 0);

You would think that there would be a method to accomplish this task, or that focus would set the cursor to the end of input by default when the instance has existing text. Anyway, this is the code that will put the cursor at the end of your CodeMirror input instance!

2016's Best Web Tools & Services (Sponsored)
https://davidwalsh.name/black-friday-2016
Fri, 25 Nov 2016

Yes, Black Friday is here and the waiting is over! We are happy to announce the best showcase of web tools and services we've ever made. You will find the best WordPress themes, website builders, and social media buttons, but also cross-browser testing services, screenshot apps, and many more awesome products. Test each one and see their great features!

BeTheme is more than just a multi-purpose, premium responsive WordPress theme: it's an excellent solution packed with 200 shortcodes with a full guide, a powerful builder that helps you customize everything and add whatever you need, and 230 included pre-built websites that can be installed with just one click. It's as simple as it sounds!

This advanced theme can be used for all your projects, building an infinite number of unique, brilliant websites. Using the powerful admin panel, you can easily customize the website without the need for any coding skills or experience. If you need a large menu, there's also a built-in mega menu. With BeTheme, your websites can have parallax effects and be fully responsive and retina ready. All of this is just a small part of what BeTheme can do; you should check out the rest. Almost 50,000 BeTheme licenses have been sold on ThemeForest alone, which shows how popular a choice it is.

Whenever you feel blocked, can't find something, or run into an issue, their support is outstanding, one of the best on the market. These guys are always ready to help you in a professional way.

Whenever you're launching a new web service or website, you should cross-browser test it before putting it online. Every browser (Chrome, Firefox, IE, etc.) is a little bit different, and every platform (Windows, Android, OSX, etc.) is a little bit different. If you don't test on all (or at least the most popular) browsers and platforms, you will lose visitors, customers, and subscribers because your web app might not work in some of their browsers.

Browserling is an excellent solution for cross-browser testing. It lets you do quick browser testing on all Windows platforms, all Internet Explorer versions, Android, and macOS, with Linux support coming soon. They offer live interactive sessions using real browsers that you get access to in less than 5 seconds. All browsers run on real computers and there are no emulators. Browserling is packed with features and they just keep adding more awesome stuff.

Browser bookmarks for even quicker testing: just bookmark the browser and access it with a single click.

Recording browsers to videos and gifs – coming soon!

They have a free plan that offers 3-minute sessions for quick testing, and a developer plan for the price of a pizza. Today only, using the Black Friday coupon code BFLING2016, you will get 33% off. Don't miss this awesome offer!

OptinMonster is an excellent solution for easily building forms that will convert abandoning visitors into buyers and subscribers. Nobody should let their visitors slip away after paying for advertising and generating content to attract them. With OptinMonster, a brilliant tool, we can build forms and use them at the right time, in the right place, for the right people. There is no need for a developer, and no need for coding skills or experience, to have well-designed, beautiful, high-converting forms.

OptinMonster is packed with features that you should check out. Their first plan starts at just $9/month, billed annually. Give it a try!

Snapito is a free and fast way to get website screenshots and modify them with the built-in editor, which works brilliantly and is packed with features. The screenshots are high quality and can be used in many projects. You don't need to install a browser extension or subscribe to a service; it's the fastest way of getting a picture of your website. They are working hard to launch LIVE API services in the first part of 2017.

As today is Black Friday and Cyber Monday is in 3 days, Themify, one of the most popular WordPress theme providers, has awesome offers for all of us. They have prepared three things. First is a coupon named BLACKFRIDAY, which gives you a 50% discount on all WordPress themes, plugins, and Club Memberships, except the Lifetime Master Club. The second coupon is BLACKFRIDAYLIFE, a $150 discount on the Lifetime Master Club membership. With this discount, the cost is just $249 and you will get lifetime access to all their products, support, and updates.

And that's not all: Themify is also giving away 10 Master Club memberships, which include access to all their products: themes, plugins, add-ons, and even PSD files. These awesome deals are available from today through Cyber Monday. Get them while you can!

uCoz is a modern website builder that lets you easily create your unique online home, for free. The system offers a wide range of modules, features, and responsive templates, making it possible to build any kind of website: an online shop, forum, blog, membership site, and more. Full access to HTML and CSS allows fine-tuning every detail of your site to make it look exactly how you want it.

Do you need a fully responsive and functional website? Start using the Simbla website builder and you can have all of that. They have many beautiful templates that will suit your needs, and they also offer a free plan with hosting included. Use Simbla and get an amazing website!

There are many benefits to using time tracking software, especially actiTIME, from a company with great experience. You will improve your team's performance, tune every process with efficiency in focus, lower your costs by being more productive, and, last but not least, make your business stronger. Start using actiTIME and benefit from this useful software!

The Simunity icon maker can be described in just a few words: free icon builder, free responsive HTML templates, and free images. It's awesome to have so many high-quality products at our disposal for free. Your life can be much easier; check out their products!

Attract visitors to your website with the help of the social media tools by uSocial: Share and Like buttons, a META data builder, and a customizable window that invites your site visitors to subscribe to your social accounts. The uSocial dashboard will give you access to statistics on impressions and shares, as well as allow you to build an unlimited number of button sets for your websites and edit them at any time.

wpDataTables is the most popular solution for easily creating charts, graphs, and tables in WordPress. This premium plugin has a lot of useful features and costs just $35. All the features are covered in the complete documentation and in their videos, so you can easily understand how it works. Try wpDataTables!

With TheSquid.Ink, you can get 2,000 awesome, high-quality line icons for just $45. 2,000 icons really is a lot, and you can use them in an unlimited number of projects. If you want a free package, they have one of those too: 50 beautifully designed icons. Go for it!

Xfive is a team of talented professionals who can help you with back-end and front-end development, WordPress development, and much more. They work with small companies but also huge brands like eBay and Microsoft. Talk to them and start your new project!

Using InvoiceNinja, you will get the best invoicing features on the market. It's really easy to create invoices and to use all of their features. They also have a free plan, and paid plans start at just $8/month. Give it a try!

There are so many solutions on the market that it can become confusing to select the best one. Some have been around for years and are well known as the best; those are the ones we've reviewed here.

Improve User Performance with Pulse Insights (Sponsored)
https://davidwalsh.name/improve-user-performance-pulse-insights
Tue, 22 Nov 2016

When I was learning web development, optimizing our websites to be less than 200kb was the standard my fellow students and I had to achieve. Internet speeds weren't as fast and large websites created a very poor user experience. To accomplish this, we would go to each page, open the developer tools (Firebug was the tool of choice at the time), and measure the size of that page.

If we found a page was too large, we would compress any images we missed the first time around, minify the code, and do everything we could to get under that magic number. This was a lot of repetitive, mind-numbing work, but the reason it was so important back then is the same as it is now: the smaller your website, the quicker it loads and the better the experience your users have.

Lucky for us, internet connections have become a lot faster, but our sites have also become much larger. A common website is now a few megabytes in size, and end users, now more than ever, want to view your content the moment they click that link.

And so I have a question for you. How large is the average page on your site? How long does it take for the average user to load a page on your website? And when was the last time you navigated your website page by page to ensure that your images are compressed and code is minified?

A handful might say a few days ago, but the rest of us will need to go back and look at our calendar. It could have been a while ago.

Now I’m not trying to be lazy, but these days I sadly don’t have the time to go page by page and manually test these things. I have new designs I need to work on, current features which need improving, bugs which need fixing and a to-do list that never shortens.

How can I ensure that each of our pages is optimized, but still maintain some form of sanity?

Today I’m going to introduce you to a tool called Pulse Real User Monitoring, which includes a powerful feature called Pulse Insights. Pulse Insights was specifically built by Raygun to save developers hours of manual labor testing so that you can focus on all those other tasks.

What is Pulse Real User Monitoring?

Pulse Real User Monitoring measures the end user experience of the people actually visiting and spending time on your website. This information is served in real time, which is invaluable for gaining visibility on all sorts of actions that would otherwise remain unseen. An example is that you can see how a recent deployment affected end users and which pages are giving you the largest load times.

Pulse Real User Monitoring tracks each individual end user’s session and measures the performance of pages and assets for that particular session.

If you would like to know more about Pulse Real User Monitoring and what it can do for your projects, then you can read a previous post we wrote here. It’s the introduction of Pulse Insights, however, that makes Raygun’s version truly powerful.

What is Pulse Insights?

Using the data provided by Pulse Real User Monitoring, Pulse Insights automatically crawls your website every week, then provides you with actionable tasks to implement which will improve the overall performance of your website. This eliminates the need to manually check your website page by page for performance issues.

After a page loads, the results are validated against a series of 22 rules which are known to yield performance benefits when implemented.

For example, Pulse Insights checks that your images are compressed and your code is minified, and it even goes as far as checking for the async or defer attribute on script elements.

The main benefits that Pulse Insights gives to your software team are:

Instant knowledge of performance improvements

Find out why a rule is important

Improve the speed of your most viewed pages

Get notified of performance issues directly to your ChatOps software

Get weekly reliable scans

Instant knowledge of the performance improvements you can make

As we know, development time is always in high demand, so the first view you see in Pulse Insights is a list of all the rules which are currently failing on your website.

This is my favorite view because, using this page, you can choose which rule to fix based upon the number of pages affected, the issues it has caused, and the difficulty of implementing the fix. This means you can get the most bang for your buck.

Find out why that rule matters and how you can resolve it

Each rule has its own detailed page to provide you more information.

The description of the issue gives you insight into why the rule was triggered, saving you from endlessly searching the web. Resources which caused a rule to fail are displayed in the dashboard, allowing you to quickly find out why a particular rule is failing.

Improve the speed of your most viewed pages

Not every page is equal in importance, and while you do need all pages to load quickly, there will be times when you need to optimize particular pages. Landing and signup pages are a good example of pages that may take priority in your business.

This is where the “Pages” view in Pulse Insights comes into play. You can search for a particular page on your website (based on score, views, or name) and then view the individual rules affecting that page:

This enables you to boost the performance on pages which matter the most to you and then see how the rules perform over time.

Get notified of performance issues directly to your ChatOps software

Pulse Real User Monitoring also integrates with ChatOps software Slack and HipChat, so your team can be alerted to performance issues in real time. This way, you can triage performance issues and prevent wasting time on minor fixes:

Get weekly reliable scans

Pulse Insights regularly scans your website and delivers reliable weekly reports on exactly where your team should be focusing efforts when it comes to performance. Having this peace of mind that you don’t need to scan your website manually is a big timesaver. The screenshot below shows a typical report showing an overview of your website’s performance from Pulse Insights:

Here’s how to get started with Pulse Real User Monitoring and Pulse Insights:

Getting started

First, take a free 30-day trial of Pulse Real User Monitoring (for web and mobile applications). You will get all the benefits of Pulse Insights.

Pulse Real User Monitoring is easy to install, with just a few lines of code to add to your application. Below is a detailed walkthrough:

Enable Pulse Real User Monitoring
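Roughly, the configuration calls look like this (a sketch based on the raygun4js v2 API; grab the current loader snippet and your real apiKey from Raygun's setup page):

```javascript
// Assumes Raygun's small loader script (provided during signup) has
// already defined the global rg4js function. 'YOUR_API_KEY' is a
// placeholder for the key Raygun generates for your application.
rg4js('apiKey', 'YOUR_API_KEY');
rg4js('enablePulse', true);
```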

Then add the script just above the closing </body> tag. You can find your apiKey after signing up and creating an application for Pulse.

Deploy & Send Data

When you have deployed these updates your end users will start sending data to Pulse Real User Monitoring. You’ll then be able to see the performance breakdown information for each page end users visit on your website.

As you can see, getting set up and sending data is easy.

Final thoughts

Gone are the days when websites needed to be under 200kb in order to load fast. No longer do you need to manually test every page by hand.

If you want to deliver your content to your end users as quickly as possible and find yourself unable to answer the questions:

“how long does it take for the average user to load a page on your website?”

“why is our website so slow?”

Pulse Real User Monitoring and Pulse Insights will give you the data you need to identify issues quickly, well before they affect your end users. Pulse Real User Monitoring was also built to complement Raygun Crash Reporting. Error reports can be matched against performance reports to find out exactly what happened, where, and to whom.

Special offer

Pulse Real User Monitoring gives you even more power to find and fix performance issues in your web applications well before they affect your end users. It can be used independently or alongside Raygun’s error tracking software for even more insight into problems affecting your users.

Raygun are offering one month free of Pulse Real User Monitoring for all David Walsh Blog readers!

Simply take a free 30 day trial here, and click on the banner below for a special deal!

Six More Tiny But Awesome ES6 Features
https://davidwalsh.name/es6-features-ii
Tue, 22 Nov 2016

ES6 has brought JavaScript developers a huge new set of features and syntax updates to be excited about. Some of those language updates are quite large but some of them are small updates you would miss if you weren’t careful — that’s why I wrote about Six Tiny But Awesome ES6 Features, a list of the little things that can make a big difference when you code for today’s browsers. I wanted to share with you six more gems that you can start using to reduce code and maximize efficiency.

1. Object Shorthand

A new object creation shorthand syntax allows developers to create key => value objects without defining the key: the var name becomes the key and the var’s value becomes the new object’s value:
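For example (the variable names here are my own):

```javascript
const name = 'David';
const age = 33;

// Pre-ES6: repeat each key and value
const oldPerson = { name: name, age: age };

// ES6 shorthand: the variable name becomes the key
const person = { name, age };

console.log(person); // → { name: 'David', age: 33 }
```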

I can’t tell you the number of times I’ve manually coded key => value properties in this exact same way — now we simply have a shorter method of completing that task.

2. Method Properties

When it comes to these ES6 tips, it seems like I obsess over just avoiding adding the function keyword…and I guess this tip is no different. In any case, we can shorten object function declarations a la:
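For instance (object and method names are my own):

```javascript
// Pre-ES6 object method declaration:
const oldCalculator = {
  add: function(a, b) { return a + b; }
};

// ES6 method property shorthand drops the function keyword:
const calculator = {
  add(a, b) { return a + b; }
};

console.log(calculator.add(2, 3)); // → 5
```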

3. Blocks Instead of IIFEs

If you declare a function within the block, it will leak out, but if you keep to let, you've essentially created an IIFE without the parens.
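A minimal sketch of the idea (identifiers are my own):

```javascript
// Pre-ES6: an IIFE kept helper variables out of the enclosing scope
(function() {
  var hidden = 'temporary value';
})();

// ES6: a bare block with let or const does the same job, sans parens
{
  let hidden = 'temporary value';
}

console.log(typeof hidden); // → 'undefined' (nothing leaked out)
```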

4. for loops and let

Because of variable hoisting within JavaScript, oftentimes we would either declare “useless” iterator variables at the top of blocks, code for(var x =..., or worst of all forget to do either of those and thus leak a global…just to iterate through a damn iterable. ES6 fixes this annoyance, allowing us to use let as the cure:
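A quick contrast between var and let iterators:

```javascript
// var hoists the iterator into the surrounding scope...
for (var j = 0; j < 3; j++) {}
console.log(j); // → 3 (j leaked)

// ...while let keeps it confined to the loop
for (let i = 0; i < 3; i++) {}
console.log(typeof i); // → 'undefined' (no leaked iterator)
```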

5. Getters and Setters

The best part is the new ability to create getters and setters for properties! No need to create special setter functions; getters and setters automatically execute when a property is read or set via basic obj.prop = {value}.
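A sketch using a class (the Person class and its trimming setter are my own example):

```javascript
class Person {
  constructor(name) {
    this._name = name;
  }
  // Runs automatically on every read of person.name
  get name() {
    return this._name.toUpperCase();
  }
  // Runs automatically on every assignment to person.name
  set name(value) {
    this._name = value.trim();
  }
}

const author = new Person('David');
console.log(author.name); // → 'DAVID'
author.name = '  Walsh  ';
console.log(author.name); // → 'WALSH'
```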

6. startsWith, endsWith, and includes

We've been coding our own basic String functions for way too long; I remember doing so in the early MooTools days. The new startsWith, endsWith, and includes String methods cover the most common of those hand-rolled helpers:
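For example:

```javascript
const framework = 'MooTools';

console.log(framework.startsWith('Moo'));  // → true
console.log(framework.startsWith('moo'));  // → false (case-sensitive)
console.log(framework.endsWith('Tools'));  // → true
console.log(framework.includes('ooT'));    // → true
```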

Seeing common-sense functions make their way to a language is incredibly satisfying.

ES6 has been an incredible leap forward for JavaScript. The tips I've pointed out in this post and the previous one go to show that even the smallest of ES6 updates can make a big difference for maintainability. I can't wait to see what the next round of JavaScript updates provides us!

Convert webm to mp4
https://davidwalsh.name/convert-webm-mp4
Tue, 22 Nov 2016

There’s an upcoming Mozilla trip to Hawaii on the cards and, in trying to be a good family man, I’m bringing my wife and two young sons. Glutton for punishment? Probably. Anyways, I’ve been feverishly downloading cartoons from YouTube using youtube-dl to put onto our new iPad to keep my oldest son occupied during the long flight.

It seems as though YouTube preserves the original video format, since sometimes youtube-dl provides a webm and other times an mp4. Since mp4s are iOS' best friend, I've needed to convert the webm files to mp4. The following ffmpeg command will make that happen:

ffmpeg -i "Spider-Man.webm" -qscale 0 "Spider-Man.mp4"

The -qscale 0 flag instructs ffmpeg not to adjust the quality of the video during conversion. And with this conversion done, I can now properly transfer the videos onto my iPad.

Check out my media tutorials for more magic when it comes to video, audio, and images. You’re sure to learn a few tricks along the way.

Update jQuery UI Widget Options
https://davidwalsh.name/update-jquery-ui-options
Mon, 21 Nov 2016

We're all used to passing options when instantiating an object, whether in JavaScript or any other language. Whether or not you can update those options later is usually up to the framework, and somehow many won't let you update them once they've been passed in. Depending on how the initialization of the object is done, sometimes that makes sense, but in most cases you should be able to update an option at any given time.

I recently needed to update a jQuery UI widget option and here’s how you update any given option:

// The general jQuery UI pattern: widgetName('option', optionName, newValue)
this.$editor.inlineEditor('option', 'forceOpen', true);

jQuery UI is mostly a legacy technology these days, so I'm passing this tip on primarily for those having to maintain old code. It does teach a good lesson, though: always provide a method for modifying initial options, even if you don't foresee a reason to do so!

Require Parameters for JavaScript Functions
https://davidwalsh.name/javascript-function-parameters
Tue, 15 Nov 2016

JavaScript is notorious for being "loose", something that some developers love but other developers loathe. I hear most of those complaints from server-side developers who want strict typing and syntax. While I like strict coding standards, I also like that JavaScript lets me quickly prototype without having to cross the I's and dot the T's. Until recently you couldn't define default parameter values for functions in JavaScript, but now you can!

When I posted last week about Six Tiny but Awesome ES6 Features, an awesome reader (cmwd) pointed out that you can not only set default function parameter values but you can throw errors when a given parameter isn’t provided to a function:

const isRequired = () => { throw new Error('param is required'); };
const hello = (name = isRequired()) => { console.log(`hello ${name}`) };
// This will throw an error because no name is provided
hello();
// This will also throw an error
hello(undefined);
// These are good!
hello(null);
hello('David');

I love this tip — it shows how with each addition to JavaScript we can stretch the language to do interesting things. How practical it is to throw errors in production is up to you but this is an awesome ability during development. Happy coding!

Inspect jQuery Element Events
https://davidwalsh.name/inspect-jquery-events
Tue, 15 Nov 2016

Building on top of other tools can be incredibly difficult, especially when you didn’t create the other tool and you can’t replace that tool. And when those other tools create loads of event listeners, you sometimes see odd behavior within the page and have no idea what the hell is going on. Unfortunately a large part of client side coding and library usage comes down to fighting your own tools.

Luckily jQuery allows you to inspect events that have been registered to a given element! Here's the magic:

// First argument is the element you want to inspect
jQuery._data(document.body, "events");

What's returned is an object whose keys represent the event names and whose values are arrays of the event handlers registered to the element, in the order they were registered. You can even inspect each handler function's URL location and contents, allowing you to see what code is messing with your page. And then, after you've cursed out the other tool, you can monkey-patch the problematic function.

Event listeners can really cause debugging misdirection within JavaScript, especially when you aren’t an expert with a given framework. Take the time to learn to leverage as many helper methods as you can — they will save you hours of frustration.

Getting Started with Whitestorm.js
https://davidwalsh.name/started-whitestormjs (Mon, 14 Nov 2016 15:00:25 +0000)

What is whitestorm.js?

Whitestorm.js is a framework for developing 3D applications or games that run in the browser. This framework is basically a wrapper around the Three.js library (like jQuery wraps the DOM to make it easier to use). It extends Three.js with a simple API and component system to make development easier and better. It uses WebGL to render 3D, so the application will run even on a smartphone or tablet.

The WebGL canvas will be automatically added to the document.body node. You can change the destination by setting a DOM element as the container property of the configuration object that we pass to WHS.World.

WHS.Sphere

Next thing to do is to make a simple sphere that will fall down on a plane. As we already have scene, camera and renderer set up, we can start making the sphere immediately. To make a simple sphere, use the WHS.Sphere
component. It is a special component that wraps THREE.SphereGeometry, mesh and physics.

By default, if you use a physics version of whitestorm.js, all objects are created as physics objects. If you don’t want a physics object, simply add physics: false to the sphere config.

Convert Websites to Apps
https://davidwalsh.name/convert-websites-apps (Thu, 03 Nov 2016 15:53:50 +0000)

Converting a website to a native app, whether on mobile or desktop, can be quite useful. The problem with bookmarks, especially for software engineers, is that we often need to work in different browsers, so having everything in one browser’s bookmark set can be a pain. I’d also argue that websites with a specific purpose are a great case for converting a website to a desktop app. I recently found nativefier, an open source utility that creates a native desktop app by wrapping the site in Electron.

Installation

You can use NPM to install nativefier and node-icns, which we’ll use to create a custom icon for the app:
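The commands themselves aren’t shown above; they presumably resembled the following. The global install is standard NPM usage, while the target URL and the --name value in the second command are my illustrative choices.

```shell
# Install nativefier and node-icns globally via NPM
npm install -g nativefier node-icns

# Wrap a site in an Electron shell; --name sets the generated app's name
nativefier --name "DevDocs" "https://devdocs.io"
```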

A directory named “{appname}-darwin-x64” will be generated and within that directory will be the app file, which you can drag to your Applications folder (or whatever your OS equivalent is) and to your dock. You’ll note that you can add custom user JavaScript and CSS files so that you can hide advertisements, modify colors and behavior, and so on. The --counter argument is particularly interesting — a web app like Gmail that updates its <title> tag as a pseudo-notification will trigger a red notification dot over the app icon when an update is made.

Web apps like IRCCloud and websites like DevDocs are perfect candidates for conversion to desktop app.

Over the past months I’ve detailed how developers can complete a variety of tasks using Cloudinary, an image (and audio and video) hosting, delivery, and transformation provider. Cloudinary’s client-side integration libraries and SDKs simplify the integration with your development platform of choice: Ruby on Rails, PHP, Node.js, Angular, .NET, Python & Django, jQuery, Java, Scala, Android, iOS and more. Once you’ve signed up for a Cloudinary account, you can upload your images using one of their many APIs, but today I’m going to introduce you to their Image Uploader — a method for uploading media from any user directly to Cloudinary, thus bypassing your own servers, which lightens their load and saves you time!

Setup

An upload preset string will be provided to you, allowing you to upload images to Cloudinary without exposing your API key.

jQuery Image Uploader

Cloudinary allows developers to upload user-provided (or “unsigned”, since the accountholder isn’t uploading) images directly to Cloudinary — no need to have your server handle any of it. Cloudinary provides a jQuery plugin for all of this work, so add those JavaScript resources to the page:

A $.cloudinary object is added to jQuery which contains a host of useful methods for retrieving and even sending images to Cloudinary; to send uploads to Cloudinary from the client side, first append Cloudinary’s upload widget to your form:

Upload is triggered after a file has been selected (or dragged). An input field is automatically added to your form whose value identifies the uploaded image for referencing in your other code. You will also be provided cloudinarydone and cloudinaryprogress events during the stages of the upload, a la:

Cloudinary’s CORS settings allow for the cross-origin upload and the image uploader supports all major browsers! If you’d like to see the image uploader in action, check out this sample photo album project.

Uploading to Cloudinary within Native Mobile OS

If you provide a mobile app, whether for iOS or Android, you can also use Cloudinary’s API for unsigned image uploads:

Cloudinary has you covered for bulk image upload on the web, server, and mobile device fronts — a complete provider!

User-provided content, especially in the case of media, can be difficult to handle. Whether it’s storage, delivery, or optimization, handling these on your own can be a disaster; multiple services, security, optimization, hosting, etc. Cloudinary, with their flexible APIs, can help. Check out Cloudinary today!

Six Tiny But Awesome ES6 Features
https://davidwalsh.name/es6-features (Mon, 31 Oct 2016 12:13:18 +0000)

Everyone in the JavaScript community loves new APIs, syntax updates, and features — they provide better, smarter, more efficient ways to accomplish important tasks. ES6 brings forth a massive wave of new goodies and the browser vendors have worked hard over the past year to get those language updates into their browsers. While there are big updates, some of the smaller language updates have put a massive smile on my face; the following are six of my favorite new additions within the JavaScript language!

1. Object [key] setting syntax

One annoyance JavaScript developers have had for ages is not being able to set a variable key’s value within an object literal declaration — you had to add the key/value after the original declaration:

Wrapping the variable key in [] allows developers to get everything done within one statement!
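A quick sketch of the computed-key syntax next to the old two-step approach:

```javascript
var key = 'color';

// ES5: the dynamic key had to be added after the literal was created
var oldWay = {};
oldWay[key] = 'blue';

// ES6: a computed property name lets you do it inline
const es6Way = { [key]: 'blue' };

console.log(es6Way.color); // → 'blue'
```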

2. Arrow Functions

You don’t need to have kept up with every ES6 change to know about arrow functions — they’ve been the source of much talk and some confusion (at least initially) to JavaScript developers. While I could write multiple blog posts to explain each facet of the arrow function, I want to point out how arrow functions provide a method for condensed code for simple functions:

No function or return keywords, sometimes not even needing to add () — arrow functions are a great coding shortcut for simple functions.
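For example, the same map callback before and after arrow functions:

```javascript
// Classic function expression
var doubledOld = [1, 2, 3].map(function (n) { return n * 2; });

// Arrow function: no `function`, no `return`, and for a single
// parameter you don't even need the parentheses
const doubledNew = [1, 2, 3].map(n => n * 2);

console.log(doubledNew); // → [2, 4, 6]
```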

3. find/findIndex

JavaScript gives developers Array.prototype.indexOf to get the index of a given item within an array, but indexOf doesn’t provide a method to calculate the desired item condition; you also need to search for an exact known value. Enter find and findIndex — two methods for searching an array for the first match of a calculated value:
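A small sketch of both methods; the sample data here is my own:

```javascript
const team = [
  { name: 'David', role: 'engineer' },
  { name: 'Joel', role: 'designer' }
];

// find returns the first matching item (or undefined)
const designer = team.find(member => member.role === 'designer');

// findIndex returns that item's index (or -1)
const designerIndex = team.findIndex(member => member.role === 'designer');

console.log(designer.name);  // → 'Joel'
console.log(designerIndex);  // → 1
```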

4. The Spread Operator

Beyond expanding arrays into function arguments, an awesome added bonus of the spread operator is being able to convert iterable objects (NodeList, arguments, etc.) to true arrays — something we’ve used Array.from or other hacks to do for a long time.
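The conversion trick can be sketched like so (using a regular function here, since arrow functions have no arguments object):

```javascript
// Convert an iterable (here: arguments) to a true Array with spread
function toArray() {
  return [...arguments];
}

// The pre-ES6 hack this replaces:
function toArrayOld() {
  return Array.prototype.slice.call(arguments);
}

console.log(toArray(1, 2, 3));    // → [1, 2, 3]
console.log(toArrayOld(1, 2, 3)); // → [1, 2, 3]
```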

5. Template Literals

Multiline strings within JavaScript were originally created by either concatenation or ending the line with a \ character, both of which can be difficult to maintain. Many developers and even some frameworks started abusing <script> tags to encapsulate multiline templates, others actually created the elements with the DOM and used outerHTML to get the element HTML as a string.
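A minimal template literal sketch, replacing both the concatenation and trailing-backslash tricks:

```javascript
const name = 'David';

// One backtick-delimited string can span multiple lines
// and interpolate values with ${}
const message = `Hello ${name},
thanks for reading!`;

console.log(message.split('\n').length); // → 2
```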

6. Default Argument Values

Other languages may throw a warning if arguments without a default value aren’t provided, but JavaScript will simply continue, setting those argument values to undefined.
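A short sketch of the old manual fallback next to an ES6 default value:

```javascript
// Without a default, a missing argument is silently undefined,
// so the fallback had to be done by hand
function greetOld(name) {
  name = name || 'world';
  return 'Hello ' + name;
}

// ES6 default argument values make the intent explicit
function greet(name = 'world') {
  return `Hello ${name}`;
}

console.log(greet());        // → 'Hello world'
console.log(greet('David')); // → 'Hello David'
```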

The six features I’ve listed here are just a drop in the bucket of what ES6 provides developers but they’re features we’ll use frequently without thinking anything of it. It’s these tiny additions that oftentimes don’t get attention but become core to our coding.

Did I leave anything out? Let me know what small additions to JavaScript you love!

stdlib: Create Scalable Node.js Microservices in a Flash
https://davidwalsh.name/stdlib (Wed, 26 Oct 2016 15:05:38 +0000)

Hey everyone – today I have the honor of walking you through using the brand
new service registry for microservices, stdlib.
You can also check out stdlib on GitHub,
which is the open source project we’ll be using to generate service scaffolding
and take care of package management.

What is stdlib?

Easy! It’s a new take on software registries and package management
that you’re probably familiar with, like NPM. Instead of focusing on local
installations of software, stdlib allows you to write simple microservices
with multiple functional HTTP endpoints that you access over-the-wire as
remote procedure calls. You can then register these services on the stdlib
central registry, at which point they’ll be discoverable by others (if you
choose to publish them) and also be completely functional web backends. It’s
actually completely cross-compatible with NPM, if you decide you want to
publish your service for local installation as well.

The stdlib registry itself is free to use, for anybody. Instead of others
downloading your code, your service itself remains closed-source, with only
HTTP endpoints exposed. The actual contents downloadable only by you and your
team. Your services can also be run for free, as of writing this article, for up
to 500,000 seconds of computation time.

You can think about using stdlib a little bit like a mix between NPM and Heroku
for microservices, focused on smaller, functional application parts. The
potential is vast: you can build a service with any amount of complexity you’d
like and never worry about scale or managing infrastructure.

Getting Started

To get started with stdlib, you’ll first have to download the command line tools,
available on NPM. You should first have Node 6.X installed, available here, at nodejs.org.

This installs the Developer Preview for CLI tools. There may be newer versions
available, so always check out stdlib on GitHub
when building new microservices. :)

Initializing a Workspace

To create a stdlib workspace that contains all of your in-development
functions, first create a directory you’ll be developing in and then initialize
stdlib.

$ mkdir stdlib
$ cd stdlib
$ stdlib init

When initializing a workspace, you’ll be asked to enter the e-mail address you
used to sign up for the registry. You can skip this step with --no-login,
but it’s not recommended – you won’t be able to push code without it! If you
don’t have an account yet, you’ll be able to create one.

Creating a Service

Creating a service is really simple. No need to create a new directory. You
should do this from the top-level workspace directory.

$ stdlib create

You’ll be asked to add a service name, and a default function name. The
default function will be the “root” web mapping of your service, if no
additional function information is given – though services can support more
than one function.

Anatomy of a Service

Services have four main components:

package.json

package.json is your main service package. This is an NPM-compatible file that
contains a "stdlib" field with the following properties:

"name": The name your service will be published to. Usually in the format
<username>/<service>. In order to push your service to the stdlib registry,
you’ll need to have access to the specified <username>.

"defaultFunction": The default entry point to your service if no function
is provided. Services are accessed by <username>/<service>/<function>, so
specifying this field as "main" would make <username>/<service> map to
<username>/<service>/main.

"timeout": The timeout, in milliseconds, for service execution. This is the
upper bound of compute time applied to all functions in the service.

"publish": Whether or not to publish publicly to the central registry when
your service is pushed to the cloud. A list of public services can be found at
the stdlib search page.

env.json

A JSON file containing environment variables to be loaded into process.env.
While working locally, services will load anything in "dev", and when you push
a release to stdlib, services will use the values contained in "release". Any
other environment names exist for staging purposes in the cloud.

function.json

You’ll notice a folder, f/, in your main service directory. If you specified
your default function name as main, you should see:

f/main/function.json
f/main/index.js

These two files live in your f/ directory; this is your first functional endpoint!

function.json is a JSON file with a few fields:

"name": The function name. Must match f/<function_path> exactly, or the
registry will throw an error.

"description": A short description of the function.

"args": An array containing information about the arguments the function
expects. More on this in a bit.

"kwargs": An object (key-value pairs) containing information about the
keyword arguments the function expects. More on this in a bit, too.
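A bare-bones function.json sketch for the default main function; because the exact entry format for args and kwargs isn’t shown here, both are left empty, and the description is my own placeholder.

```json
{
  "name": "main",
  "description": "Returns a friendly greeting",
  "args": [],
  "kwargs": {}
}
```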

index.js

This is paired with a function.json file in a function directory. It’s a
simple function of the format:

Where params is an object containing the arguments (params.args) and
keyword arguments (params.kwargs) passed to the function.

callback is a function callback that ends function execution, expecting an
error argument (or null if no error) and a JSON-serializable result (or
a Buffer for file processing).
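Putting that together, a sketch of what f/main/index.js might look like; the kwargs handling is my own illustration of the params shape described above.

```javascript
// f/main/index.js — a sketch of the (params, callback) function shape
const main = (params, callback) => {
  // params.args: positional arguments; params.kwargs: keyword arguments
  const name = (params.kwargs && params.kwargs.name) || 'world';
  // callback(error, result): null error plus a JSON-serializable result
  callback(null, 'hello ' + name);
};

// Export for the stdlib runtime (guarded for non-CommonJS contexts)
if (typeof module !== 'undefined') module.exports = main;

// Local sanity check (the runtime normally invokes this for you):
main({ args: [], kwargs: {} }, (err, result) => {
  console.log(result); // → 'hello world'
});
```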

Using Your Service Locally

The first service you created from stdlib create should have one function
that returns "hello world". To run this function (and test it), first
go to your service directory:

$ cd <username>/<service>

If your username is best_developer and you created a service named test, you
would do:

$ cd best_developer/test

Now simply type:

$ f .
> "hello world"

Amazing! :) stdlib comes packaged with a command-line testing tool called f that
relies upon the f package from NPM / GitHub to do
microservice testing. If you begin your function path with a ., it will search
for a local function; otherwise it will run a live function in the cloud.

Note that this execution is equivalent to:

$ f ./main
> "hello world"

If your "defaultFunction" is set to "main".

Registering Your Service in the Cloud

There are two ways to register your service: either in a staging environment,
or as a release. Staging environments are mutable and can be overwritten at
will. Releases are immutable and cannot be overwritten, but they can be torn
down.

Register as a Staging Service

To push your service to the cloud in a staging environment, use:

$ stdlib up <environment>

Where <environment> is the name of your intended staging environment. This
name maps directly to the process.env variables stored in env.json, so
make sure you use the right name. Your local service runs in the dev
environment, so that’s a good target to test in the cloud as well:

$ stdlib up dev

This will now register and compile your service. Once complete, you’ll be able
to run your service in the cloud using:

$ f <username>/<service>@dev
> "hello world"

Or, using the example above:

$ f best_developer/test@dev
> "hello world"

You can also access your service over HTTPS via curl or in your browser:

https://f.stdlib.com/best_developer/test@dev

Voila! :)

Register as a Release

Releases are immutable, so make sure you’re confident in what you’re
pushing: though they can be torn down, versions can never be overwritten
(or go “back” by semver standards). To release a service, use:

$ stdlib release

This will register your service with the version specified in package.json.
If you would like your service publicly searchable on the registry, set
"publish": true in the "stdlib" field of package.json.

Removing a Service

Note that stdlib rollback can also be used as a shortcut to remove the
currently specified release if published by accident.

Restarting or Rebuilding a Service

If your service, once published to the cloud, stops working for any reason,
you can try restarting it with:

$ stdlib restart <environment> [-r <version>]

This shouldn’t be necessary, but the option exists should you encounter any
errors.

Additionally, you can also rebuild a service. This reinstalls package
dependencies and uses the most up-to-date version of the stdlib microservice
software. This may be encouraged as we roll out updates, for performance and
security reasons.

$ stdlib rebuild <environment> [-r <version>]

Creating More Service Functions

This will create a new “hello world” function with the newly specified name.
Modify it to your heart’s content! It is not a default function, so it
will need to be accessed using (provided the name new-func):

$ f ./new-func

Arguments and Keyword Arguments

To pass different params.args and params.kwargs values to your function,
use:

Web Browser

The f library can be used identically in the browser. It’s part of the
same GitHub repository and can be installed via Bower using poly/f.

That’s it!

That’s all you need to get started with stdlib. I definitely hope you enjoy
using it as much as I’ve enjoyed building it. Stay tuned for more articles
showing some sample functions and neat things you can do with microservices.

Good luck, have fun, and happy building!

FAQ

You made it this far, eh? I’m very thankful to David Walsh for hosting this and
for being as excited as I am about the future of the project. :)

Why the name stdlib?

It’s a play on the C Standard Library,
or #include <stdlib.h>. We were wondering what you’d call a registry for
remote procedure calls on the web and stdlib.com just made the most sense!

Why args (arguments) and kwargs (keyword arguments)?

It’s a specification from Python, but it works, makes sense, and maps to a ton
of other languages we’ll be building SDKs for. This way developers can write
functions that only expect unnamed arguments, or can specify the names of the
arguments they’d like to have passed in.

Why use stdlib? Why not, say, AWS Lambda?

stdlib service hosting is actually built on top of AWS Lambda. If that’s what you
prefer, by all means, we don’t want to impede you. stdlib isn’t a replacement
for Lambda, it’s a registry! It’s a different offering altogether that augments
the “server-less” model to ease your mind completely and give you workflows that
your team can organize around so you don’t have to build them yourself. stdlib
provides easier version control, package management, team tooling and other
nifty features.

Who created stdlib?

stdlib was created in a basement by a dog named Ruby. Some say you can still
hear her panting when you register a service. All jokes aside, my name is Keith
Horwood and I’m known for doing Node and open source stuff. (Ruby is my dog though, and
she’s beautiful.) I’m probably best known for authoring the popular open source
API framework, Nodal. You can follow me
on Twitter, @keithwhor but it would be better
if you followed our whole company – @Polybit!

How can I contribute?

Submit a pull request or open an issue on the stdlib GitHub repository.
If you want to help us create the future of “server-less” technology, you can
also apply to work with us at Polybit. We’re new,
we’re growing, and if you’re enthusiastic with good ideas, we want you to join
and tell us how to make everything better :).

Responsive Images with Client Hints
https://davidwalsh.name/responsive-images-client-hints (Mon, 24 Oct 2016 11:56:25 +0000)

It doesn’t take being a performance fanatic to know that images can really slow down a page’s load time. We’ve come a long way when it comes to images, from lazy loading them to using better image formats like WebP, but these techniques all involve loading the same static image URL, which may be good for desktops but not for mobile devices, and vice versa. We do have srcset with img tags now, but that can be difficult to maintain for dynamic, user-driven websites.

My experiments with Cloudinary have shown me that they have a solution for almost everything when it comes to media. My prior experiments include:

Another new way of optimizing image delivery is called “client hints“: a new set of HTTP request headers sent to the server to provide information about the device, allowing more intelligent output. Here’s the precise explanation of client hints from the standards document:

This specification defines a set of HTTP request header fields, colloquially known as Client Hints, to address this. They are intended to be used as input to proactive content negotiation; just as the Accept header field allows clients to indicate what formats they prefer, Client Hints allow clients to indicate a list of device and agent specific preferences.

Let’s have a look at current “responsive image” hints and then image optimization with client hints!

Responsive Images with CSS

There are currently two ways I use CSS for responsive images. The first is by setting a max-width on images:

img {
max-width: 100%;
}

The second method is by scoping background images with CSS media queries:
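A sketch of the media-query scoping approach; the selector and file names are placeholders:

```css
/* Serve a smaller background image on narrow screens */
.hero {
  background-image: url("photo-large.jpg");
}

@media (max-width: 600px) {
  .hero {
    background-image: url("photo-small.jpg");
  }
}
```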

Both work, but each has its own issues: the first method always serves the large image file regardless of screen size, and the second method bloats your CSS (image-scoping every image — gross!) and requires the use of a background image.

Responsive Images with JavaScript

There are many more libraries out there that will do the job, but my problem with these JavaScript-based approaches is that they can sometimes add huge weight to the page and they don’t provide a “native” image approach, i.e. you have to wait for the DOM to load, then analyze the images, then set widths and make requests, etc. A more classic approach would be more performant.

<img srcset>

The current method for providing responsive image paths is a bit ugly and can be tedious to create:

Essentially we specify a new image for specified widths in a sort of odd single-string format. For this method you need to create separate images or engineer a smart querystring-based system for dynamically generating images. In many cases both options are impractical.
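For reference, a typical srcset declaration looks something like this; the file names and breakpoints are placeholders:

```html
<img src="photo-800.jpg"
     srcset="photo-480.jpg 480w, photo-800.jpg 800w, photo-1200.jpg 1200w"
     sizes="(max-width: 600px) 480px, 800px"
     alt="A responsive photo">
```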

Using Client Hints

The first part of using client hints is providing a single meta tag with the hints you’d like to provide to the server:

<meta http-equiv="Accept-CH" content="DPR, Width">

With the snippet above, we direct the browser to provide Width and DPR (device pixel ratio) hints to the server when requesting the image. Using Chrome’s “Network” panel we can see those headers being sent:

If we stop and think for a moment, there’s a lot we can do by pulling the Width, DPR, and other hints from their headers:

Store the data so we can analyze patterns and possibly cut different image dimensions

Generate, store, and return a custom image for the given file size

Return a different image type for a given device

The client hint is something we’ve always wanted: a tip from the client as to its size and other visual characteristics! I love that client hints are easy to implement on the client side: add a <meta> tag, add a sizes attribute to your image, and you’re golden. The hard part is the server side: you need to add dynamic, optimized response logic — that’s where Cloudinary can help.

Client Hints with Cloudinary

Cloudinary wants to make creating and managing responsive imagery their problem. Cloudinary offers APIs for many languages (Python, Node.js, etc.), even allowing delivery of dynamic images via a URL. Let’s create an image with an automatic DPR hint:
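The tag itself isn’t shown above; it presumably combined the w_512,dpr_auto transformation with the demo bike image used below, something like:

```html
<img src="https://res.cloudinary.com/demo/w_512,dpr_auto/bike.jpg" alt="bike">
```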

The w_512,dpr_auto portion of the image URL triggers sending a different image resource to each user based on their context. For browsers that support client hints, 1x devices will receive 1x resources; 2x screens will receive 2x resources; display density triggers a difference in resource delivery.

Now let’s do automatic image width with client hints:

<img src="https://res.cloudinary.com/demo/w_auto,dpr_auto/bike.jpg">

Same effect: w_auto sends a different image size from the same URL based on the client hint — incredibly convenient when creating dynamic content — no need for ugly srcset management!

Let’s break down the code above, specifically the w_auto:100:400 piece:

100 represents the increment by which the image is calculated with relation to the client hint, unless 1 is provided, in which case the image will then be scaled to the exact layout width (this is bad — if the client isn’t a standard device width, performance will be impacted). If the client hint for Width is 444, the image will round up and a 500 pixel image will be returned.

400 represents the fallback image width in the case that the client hints API isn’t supported by the browser or a hint simply isn’t sent (i.e. Width isn’t listed in the <meta> tag). If this argument isn’t provided, the full image size is returned, so if your image is very large (i.e. an original photo), you’ll definitely want to provide this argument.
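The rounding behavior described above can be sketched in plain JavaScript. This is my own illustration of the described behavior; the real logic runs on Cloudinary’s servers.

```javascript
// Mimic of Cloudinary's w_auto:<increment>:<fallback> width selection
// (illustrative only — not Cloudinary code).
function selectWidth(hintedWidth, increment, fallback) {
  if (hintedWidth == null) return fallback; // no Width client hint sent
  if (increment === 1) return hintedWidth;  // exact layout width (discouraged)
  return Math.ceil(hintedWidth / increment) * increment; // round up to next step
}

console.log(selectWidth(444, 100, 400));  // → 500, as in the example above
console.log(selectWidth(null, 100, 400)); // → 400, the fallback width
```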

Unfortunately only Opera and Chrome support client hints at this time, while Firefox and Edge are considering adding client hint support. I will say I find this new advancement a perfect marriage of server and client side communication when it comes to assets and device display. Let’s hope client hints are globally adopted — we’ll be able to really tighten up image delivery, especially when you use an awesome service like Cloudinary!

Node.js Raw Mode with Keystrokes
https://davidwalsh.name/node-raw-mode (Mon, 17 Oct 2016 11:57:24 +0000)

I find the stuff that people are doing with Node.js incredibly interesting. You hear about people using Node.js to control drones, Arduinos, and a host of other devices. I took advantage of Node.js to create a Roku Remote, a project that was fun and easier than I thought it would be. There was one piece of this experiment that was difficult, however: listening for keystrokes within the same shell that executed the script.

The process for using the remote is as follows:

Execute the script to connect to your Roku: node remote

In the same shell, use arrow keys and hot keys to navigate the Roku

Press CONTROL+C to kill the script

The following JavaScript code is what I needed to listen for keystrokes within the same shell once the script had been started:
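The original snippet isn’t reproduced here, but a minimal sketch of the idea uses Node’s TTY raw mode; the isTTY guard and function wrapper are my additions.

```javascript
// Sketch: put stdin into raw mode and handle each keypress as it happens.
// Only works in an interactive shell; the guard keeps it safe elsewhere.
function listenForKeypresses(onKey) {
  if (!process.stdin.isTTY) return false; // raw mode requires a TTY
  process.stdin.setRawMode(true);         // deliver keystrokes immediately
  process.stdin.resume();                 // keep the process alive, listening
  process.stdin.setEncoding('utf8');
  process.stdin.on('data', (key) => {
    if (key === '\u0003') process.exit(); // CONTROL+C kills the script
    onKey(key);                           // e.g. forward to the Roku REST API
  });
  return true;
}

// Usage: listenForKeypresses(key => console.log('pressed:', JSON.stringify(key)));
```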

The code above turns your Node.js script into an active wire for listening to keypress events. With my Roku Remote, I pass arrow and letter keypress events directly to the Roku via a REST API (full code here). I love that Node.js made this so easy — another reason JavaScript always wins!

How to Deliver a Smooth Playback without Interruptions (Buffering)
https://davidwalsh.name/cloudinary-video (Mon, 10 Oct 2016 12:17:17 +0000)

There’s only one thing worse than no internet: unreliable internet. The frustration I feel when one page loads quickly, then the next very slowly (if at all), and then a mixture, is unmanageable. Like…throw-your-device-across-the-room frustrating. This slowness is most apparent when trying to play media, specifically video, where it’s visually janky, the sound cuts off, and you’re seething with fury.

Last week I wrote about HTML5 Video Best Practices and the awesome utilities provided by Cloudinary to place optimized and configurable videos within your site. Cloudinary lets you customize the video poster, video controls, and even apply filters and transformations to the video itself. Taking a deeper look, you can even control the bitrate and codecs levels, allowing for better customization of video delivery.

Uploading a Video

You can upload a video within the Cloudinary website but let’s have some fun and use the Cloudinary Node.js API to upload a video:
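The call isn’t shown above; here is a sketch using the Cloudinary Node.js SDK, guarded so it degrades gracefully when the package isn’t installed. Credentials setup is omitted, and the file path is a placeholder — the key detail is the resource_type: 'video' option.

```javascript
// Sketch of a video upload with the Cloudinary Node.js SDK (assumes the
// `cloudinary` NPM package and configured account credentials).
let cloudinary;
try {
  cloudinary = require('cloudinary').v2;
} catch (e) {
  cloudinary = null; // package not installed in this environment
}

function uploadVideo(path) {
  if (!cloudinary) return Promise.resolve(null);
  // resource_type: 'video' tells Cloudinary to treat the upload as video
  return cloudinary.uploader.upload(path, { resource_type: 'video' });
}

// Usage: uploadVideo('sample-video.mp4').then(result => console.log(result));
```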

Manipulating Video Quality & Bit Rate

Depending on the device, browser, traffic load, length of video, or a range of other variables, you may want to be able to modify the quality or bit rate of a video on your site. Though quality and bit rate are two separate manipulations, remember that the higher the bit rate, the higher the quality.

Let’s first set a desired quality:

cloudinary.video('sample-video', { quality: 50 });

Setting desired bit rate is just as easy:

cloudinary.video('sample-video', { bit_rate: '250k' });

The API is so easy to use, no surprises!

Adaptive Bitrate Streaming – HLS and MPEG

Adaptive bitrate streaming is a video delivery technique that adjusts the quality of a video stream in real time according to detected bandwidth and CPU capacity. This enables videos to start quicker, with fewer buffering interruptions, and at the best possible quality for the current device and network connection, to maximize user experience.

Cloudinary can automatically generate and deliver all of these files from a single original video, transcoded to either or both of the following protocols:

HTTP Live Streaming (HLS)

Dynamic Adaptive Streaming over HTTP (MPEG-DASH)

Setting up streaming is a multistep (but easy) process — let’s have a look at how to make that happen!

Step One: Select a Streaming Profile

Cloudinary provides a collection of predefined streaming profiles, where each profile defines a set of representations according to suggested best practices.

For example, the 4k profile creates 8 different representations in 16:9 aspect ratio, from extremely high quality to audio only, while the sd profile creates only 3 representations, all in 4:3 aspect ratio. Other commonly used profiles include the hd and full_hd profiles.

Step Two: Upload Your Video with an Eager Transformation Including the Streaming Profile and Format

A single streaming profile is comprised of many derived files, so it can take a while for Cloudinary to generate them all. Therefore, when you upload your video (or later, explicitly), you should include eager, asynchronous transformations with the required streaming profile and video format.

You can even eagerly prepare your videos for streaming in both formats and you can include other video transformations as well. However, make sure the streaming_profile is provided as a separate component of chained transformations.

For example, this upload command encodes the big_buck_bunny.mp4 video to HLS format using the full_hd streaming profile:
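The embedded command was lost in the feed export; a sketch of the upload options it would pass is below (the callback is elided, and eager/eager_async follow Cloudinary’s documented upload parameters):

```javascript
// Sketch of the upload options for HLS output with the full_hd streaming profile.
// With the SDK: cloudinary.v2.uploader.upload('big_buck_bunny.mp4', options, callback)
const options = {
  resource_type: 'video',
  eager: [
    { streaming_profile: 'full_hd', format: 'm3u8' } // HLS
    // add { streaming_profile: 'full_hd', format: 'mpd' } to also prepare MPEG-DASH
  ],
  eager_async: true // generate the many derived files in the background
};

console.log(JSON.stringify(options.eager[0]));
// → {"streaming_profile":"full_hd","format":"m3u8"}
```

Note that streaming_profile sits in its own component of the eager transformation, as the docs require, so any extra transformations would go in a separate chained component.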

Step Three: Video Delivery

After the eager transformation is complete, deliver your video using the .m3u8 (HLS) or .mpd (MPEG-DASH) file extension, and include the streaming_profile parameter (sp_<profilename>) plus any other transformations exactly as you provided them in the eager transformation (matching the URL returned in the upload response).
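Putting that together, the HLS delivery URL pairs the sp_ transformation with the .m3u8 extension. A sketch, again using a placeholder 'demo' cloud name:

```javascript
// Sketch of the adaptive-streaming delivery URL; 'demo' is a placeholder cloud name.
function streamingUrl(cloudName, profile, publicId, format) {
  return `https://res.cloudinary.com/${cloudName}/video/upload/sp_${profile}/${publicId}.${format}`;
}

console.log(streamingUrl('demo', 'full_hd', 'big_buck_bunny', 'm3u8'));
// → https://res.cloudinary.com/demo/video/upload/sp_full_hd/big_buck_bunny.m3u8
console.log(streamingUrl('demo', 'full_hd', 'big_buck_bunny', 'mpd'));
// → https://res.cloudinary.com/demo/video/upload/sp_full_hd/big_buck_bunny.mpd
```

Hand that URL to an HLS- or DASH-capable player and it will pick the best representation for the viewer’s bandwidth automatically.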

JavaScript Copy to Clipboard
https://davidwalsh.name/clipboard
Thu, 06 Oct 2016 19:11:19 +0000

“Copy to clipboard” functionality is something we all use dozens of times daily, but the client-side API around it has always been lacking; some older APIs and browser implementations required a scary “are you sure?”-style dialog before the content would be copied to the clipboard — not great for usability or trust. About seven years back I blogged about ZeroClipboard, a solution for copying content to the clipboard in a more novel way…

…and by novel way I mean using Flash. Hey — we all hate on Flash these days, but functionality is always the main goal, and it was quite effective for this purpose, so we have to admit it was a decent solution. Years later we have a better, Flash-free solution: clipboard.js.

Events
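The post’s embedded example didn’t make it into the feed, so here’s a minimal sketch of wiring up the library’s events. It assumes clipboard.js is loaded on the page (the global is ClipboardJS in current releases, plain Clipboard in the 2016-era v1) and that a trigger element like <button class="btn" data-clipboard-text="hello"> exists:

```javascript
// Sketch: initialize clipboard.js and listen for its success/error events.
// Returns null when the library isn't loaded (e.g. outside the browser).
function initClipboard(selector) {
  if (typeof ClipboardJS === 'undefined') return null;
  const clipboard = new ClipboardJS(selector);
  clipboard.on('success', (e) => {
    console.log('Copied:', e.text);
    e.clearSelection(); // clear the text selection the copy leaves behind
  });
  clipboard.on('error', (e) => {
    console.error('Copy failed, action:', e.action);
  });
  return clipboard;
}

initClipboard('.btn');
```

The success handler is the natural place to flash a “Copied!” tooltip; the error handler fires where the browser refuses programmatic copy, so you can fall back to prompting the user to press Ctrl+C.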

No Flash, a simple API, and support in all major browsers make clipboard.js a huge win for the web and its users. The days of Flash shimming functionality on the client side are over — long live web technology!

Let Top Tech Companies Apply to You (Sponsored)
https://davidwalsh.name/top-tech-companies-apply
Thu, 06 Oct 2016 12:01:46 +0000

Indeed Prime upends the traditional hiring hierarchy so you can take control of your job search. Stop applying to jobs. Let jobs apply to YOU.

You can sign up for Indeed Prime in a few minutes. Once approved, you will start receiving interview offers. You get salary and equity information upfront, so guesswork is eliminated. You’re in charge when you’re on Indeed Prime. Engage with just the companies you like.

Indeed Prime’s writing team curates your work experience and career goals into an attractive snapshot shopped to our employer network. These industry insiders show you off with high-impact, unique branding. You may not even recognize how good you look.

Stay in charge by responding to companies via our website. Minimal effort — just click interested or not interested. Then use our quick scheduling tool to cut through the hassle of logistics. Engineered by Indeed’s superhuman developers, Indeed Prime’s platform is quick-footed, simple in design and content.

Indeed Prime has launched in San Francisco, New York, Austin, Seattle, Los Angeles, Boston and London (UK). Sign up today!