https://www.4elements.com/blog/read/short-course-better-angular-app-architecture-with-modules (published 2019-01-21)

In all of the holiday rush, maybe you missed a short course we published on the very last day of the year, Better Angular App Architecture With Modules. Learn more about it and watch a free introductory video below.

What You’ll Learn

When you're just starting out with Angular, you'll probably write your app as a single module. This works fine for small apps, but for more complex production apps, the single-module approach will quickly get out of control and make the code hard to maintain.

In this course, Dan Wellman will teach you how to move from a small single-module Angular app to a larger, more complex multi-module architecture.

Among other things, you'll learn about:

adding submodules and feature modules

lazy-loading feature modules

handling services

creating a third-party module

Along the way, you'll see a practical example of how to break an app down into discrete sections in order to minimise and organise the complexity that comes with a growing codebase.

Watch the Introduction

Take the Course

You can take our new course straight away with a subscription to Envato Elements. For a single low monthly fee, you get access not only to this course, but also to our growing library of over 1,000 video courses and industry-leading eBooks on Envato Tuts+.

Plus you now get unlimited downloads from the huge Envato Elements library of 870,000+ creative assets. Create with unique fonts, photos, graphics and templates, and deliver better projects faster.

https://www.4elements.com/blog/read/preview-our-new-course-on-angular-material (published 2019-01-09)

Angular Material makes it easy to create a great UI for your Angular app. See how it works in our new course, Building App UIs With Angular Material. Keep reading for some free videos from the course.

What You’ll Learn

In this course, Dan Wellman will show you how to use Angular and Angular Material to build rich and interactive UIs for your web apps. You'll learn how to set up the library in a new project and how to use each of the main interface components and layout components. You'll also learn how to create forms and use overlay components such as dialogs and tooltips.

Here are a couple of free lessons from this course, as a preview of what you can expect:

Adding Your First Material Component

In this video, you'll learn how to add your first Material component, which will be the Material sidebar component. This is one of the components that you can generate using the Angular CLI, so it makes sense to start here.

Material Form Fields

In this video, you'll learn about the Material form field container, which is the basis for creating styled and dynamic form controls using Angular Material.


https://www.4elements.com/blog/read/build-your-own-captcha-and-contact-form-in-php (published 2019-01-08)

People write code every day to automate a variety of processes. We exploit the fact that computers are a lot faster and more accurate than humans, which lets us simplify a lot of mundane tasks. Unfortunately, these same abilities can be used to program computers to do something malicious, like sending spam or guessing passwords. The focus of this tutorial will be on combating spam.

Let's say you have a website with a contact form to make it easy for visitors to contact you. All they have to do is fill out a form and hit the send button to let you know about a problem or request they have. This is an important feature of a public-facing website, but the process of filling out form values can be automated by malicious users to send a lot of spam your way. This type of spamming technique is not limited to just contact forms. Bots can also be used to fill your forums with spam posts or comments that link to harmful websites.

One way to solve this problem is to devise a test which can distinguish between bots which are trying to spread spam and people who legitimately want to contact you. This is where CAPTCHAs come in. They generally consist of images with a random combination of five or six letters written on a colored background. The idea is that a human will be able to read the text inside the image, but a bot won't. Checking the user-filled CAPTCHA value against the original can help you distinguish bots from humans. CAPTCHA stands for "Completely Automated Public Turing test to tell Computers and Humans Apart".

In this tutorial, we will learn how to create our own CAPTCHAs and then integrate them with the contact form we created in a previous tutorial on building a contact form in HTML and PHP.
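The original snippet was not preserved in this extract, so here is a minimal sketch of what the string generator described below might look like; the exact character set and the function body are assumptions, not the tutorial's original code:

```php
<?php
// Characters used to build the CAPTCHA string. Sticking to capital letters
// avoids easily confused characters such as O/0 and I/1.
$permitted_chars = 'ABCDEFGHJKLMNPQRSTUVWXYZ';

// Build a random string of $length characters drawn from $input.
function generate_string($input, $length = 5) {
    $random_string = '';
    $input_length = strlen($input);

    for ($i = 0; $i < $length; $i++) {
        // Pick one character at a random offset and append it.
        $random_string .= $input[random_int(0, $input_length - 1)];
    }

    return $random_string;
}

echo generate_string($permitted_chars);      // e.g. "KPTXR"
echo generate_string($permitted_chars, 10);  // e.g. a ten-letter string
```

Passing a second argument, as in `generate_string($permitted_chars, 10)`, changes the length of the generated string.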

The $permitted_chars variable stores all the characters that we want to use to generate our CAPTCHA string. We are only using capital letters in the English alphabet to avoid any confusion that might arise due to letters or numbers that might look alike. You can use any set of characters that you like to increase or decrease the difficulty of the CAPTCHA.

Our function creates a five-letter string by default, but you can change that value by passing a different parameter to the generate_string() function.

Render the CAPTCHA Background

Once we have our random string, it's time to write the code to create the background of the CAPTCHA image. The image will be 200 x 50 pixels in size and will use five different colors for the background.
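The tutorial's snippet is missing from this extract; the following is a rough sketch of the background step using PHP's GD extension. The shade math, rectangle dimensions, and helper names are assumptions:

```php
<?php
// Compute $steps progressively darker shades of a base RGB color.
// The first shade is the base color itself; the last is the darkest.
function make_shades($red, $green, $blue, $steps = 5) {
    $shades = [];
    for ($i = 0; $i < $steps; $i++) {
        $shades[] = [
            intdiv($red * ($steps - $i), $steps),
            intdiv($green * ($steps - $i), $steps),
            intdiv($blue * ($steps - $i), $steps),
        ];
    }
    return $shades;
}

// Render a 200x50 background filled with the lightest shade and
// scattered with small rectangles in the darker shades. Requires GD.
function render_background($width = 200, $height = 50) {
    $image = imagecreatetruecolor($width, $height);

    // Random base color, then five shades of it as GD color handles.
    $shades = make_shades(random_int(100, 255), random_int(100, 255), random_int(100, 255));
    $colors = [];
    foreach ($shades as [$r, $g, $b]) {
        $colors[] = imagecolorallocate($image, $r, $g, $b);
    }

    // Fill the whole background with the lightest shade.
    imagefill($image, 0, 0, $colors[0]);

    // Draw rectangles of random thickness (2-10 px) at random locations,
    // colored with one of the four darker shades.
    for ($i = 0; $i < 10; $i++) {
        $x = random_int(0, $width);
        $y = random_int(0, $height);
        $thickness = random_int(2, 10);
        imagefilledrectangle($image, $x, $y, $x + random_int(20, 60), $y + $thickness,
            $colors[random_int(1, count($colors) - 1)]);
    }

    return $image;
}
```

For example, `make_shades(200, 100, 50)` returns the base color `[200, 100, 50]` first, followed by four progressively darker shades.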

We begin with random values for the variables $red, $green, and $blue. These values determine the final color of the image background. After that, we run a for loop to create progressively darker shades of the original color. These colors are stored in an array. The lightest color is the first element of our $colors array, and the darkest color is the last element. The lightest color is used to fill the whole background of the image.

In the next step, we use a for loop to draw rectangles at random locations on our original image. The thickness of the rectangles varies between 2 and 10, while the color is chosen randomly from the last four values of our $colors array.

Drawing all these rectangles adds more colors to the background, making it a little harder to distinguish the foreground of the CAPTCHA string from the background of the image.

Your CAPTCHA background should now look similar to the following image.

Render the CAPTCHA String

For the final step, we just have to draw the CAPTCHA string on our background. The color, y-coordinate, and rotation of individual letters is determined randomly to make the CAPTCHA string harder to read.
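The drawing code didn't survive in this extract; a sketch of the step might look like the following, where the font sizes, rotation range, and helper names are assumptions (`imagettftext` comes from the GD extension):

```php
<?php
// Compute the x-coordinate of each letter: 15 px of padding on both
// sides, with the remaining 170 px divided equally among the letters.
function letter_x($index, $count, $width = 200, $padding = 15) {
    $space = ($width - 2 * $padding) / $count;
    return (int) ($padding + $index * $space);
}

// Draw each character of $text onto the GD image $image with a random
// color, y-coordinate, rotation, and font. $fonts is an array of paths
// to TTF files in the fonts directory.
function render_text($image, $text, array $fonts) {
    $black = imagecolorallocate($image, 0, 0, 0);
    $white = imagecolorallocate($image, 255, 255, 255);

    for ($i = 0; $i < strlen($text); $i++) {
        imagettftext(
            $image,
            24,                                  // font size in points
            random_int(-15, 15),                 // rotation in degrees
            letter_x($i, strlen($text)),         // x-coordinate
            random_int(30, 45),                  // random y-coordinate
            random_int(0, 1) ? $black : $white,  // random black or white
            $fonts[array_rand($fonts)],          // random font file
            $text[$i]
        );
    }
}
```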

As you can see, I'm using some fonts I downloaded from Google to get variation in the characters. There is a padding of 15 pixels on both sides of the image. The leftover space—170 pixels—is divided equally among all the CAPTCHA letters.

After rendering the text string above the background, your result should look similar to the image below. The characters will be different, but they should be slightly rotated and a mix of black and white.

The fonts that you want to use will go into the fonts directory. Now, you simply have to add the following HTML code above the Send Message button from our previous tutorial on creating a contact form in HTML and PHP.

Sometimes, the CAPTCHA text will be hard to read even for humans. In these situations, we want them to be able to ask for a new CAPTCHA in a user-friendly manner. The redo icon above helps us do exactly that. All you have to do is add the JavaScript below on the same page as the HTML for the contact form.

After integrating the CAPTCHA in the form and adding a refresh button, you should get a form that looks like the image below.

The final step in our integration of the CAPTCHA we created with the contact form involves checking the CAPTCHA value input by users when filling out the form and matching it with the value stored in the session. Update the contact.php file from the previous tutorial to have the following code.
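The updated contact.php isn't reproduced in this extract; a minimal sketch of the check could look like this, where the `captcha_challenge` field name and the `captcha_text` session key are assumptions:

```php
<?php
// Return true only if the user-submitted value matches the stored one.
// hash_equals does a constant-time, case-sensitive string comparison.
function captcha_is_valid($input, $stored) {
    return is_string($input) && is_string($stored) && hash_equals($stored, $input);
}

// In contact.php, before sending the email:
//
//   session_start();
//   if (!captcha_is_valid($_POST['captcha_challenge'] ?? null,
//                         $_SESSION['captcha_text'] ?? null)) {
//       exit('The CAPTCHA you entered was incorrect. Please try again.');
//   }
//   unset($_SESSION['captcha_text']); // prevent replaying the same answer
```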

We updated this file to first check if the CAPTCHA value stored in the session is the same as the value input by the user. If they are different, we simply tell the visitors that they entered an incorrect CAPTCHA. You can handle the situation differently based on what your project needs.

Final Thoughts

In this tutorial, we created our own CAPTCHA in PHP from scratch and integrated it with a PHP contact form we built in one of our earlier tutorials. We also made the CAPTCHA more user-friendly by adding a refresh button so that users get a new string with a new background in case the previous one was unreadable.

You can also use the logic from this tutorial to create a CAPTCHA that relies on solving basic arithmetic problems, like addition and subtraction.

If you want to add a CAPTCHA to your website, you should check out some of the form and CAPTCHA plugins available from CodeCanyon. Some of these have CAPTCHA and many other features like a file uploader built in.

If you have any questions or suggestions, feel free to let me know in the comments. You should also take a look at this list of best PHP contact forms.

https://www.4elements.com/blog/read/object-oriented-php-with-classes-and-objects

In this article, we're going to explore the basics of object-oriented programming in PHP. We'll start with an introduction to classes and objects, and we'll discuss a couple of advanced concepts like inheritance and polymorphism in the latter half of the article.

What Is Object-Oriented Programming (OOP)?

Object-oriented programming, commonly referred to as OOP, is an approach which helps you to develop complex applications in a way that's easily maintainable and scalable over the long term. In the world of OOP, real-world entities such as Person, Car, or Animal are treated as objects. In object-oriented programming, you interact with your application by using objects. This contrasts with procedural programming, where you primarily interact with functions and global variables.

In OOP, there's a concept of "class", which is used to model or map a real-world entity to a template of data (properties) and functionality (methods). An "object" is an instance of a class, and you can create multiple instances of the same class. For example, there is a single Person class, but many person objects can be instances of this class—dan, zainab, hector, etc.

The class defines properties. For example, for the Person class, we might have name, age, and phoneNumber. Then each person object will have its own values for those properties.

You can also define methods in the class that allow you to manipulate the values of object properties and perform operations on objects. As an example, you could define a save method which saves the object information to a database.

What Is a PHP Class?

A class is a template which represents a real-world entity, and it defines properties and methods of the entity. In this section, we’ll discuss the basic anatomy of a typical PHP class.

The best way to understand new concepts is with an example. So let's have a look at the Employee class in the following snippet, which represents the employee entity.
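The snippet itself didn't survive in this extract, so here is a reconstruction of what such an Employee class might look like; the property names follow the text below, while the method bodies are assumptions:

```php
<?php
class Employee
{
    // Class properties: private, so only accessible from within the class.
    private $first_name;
    private $last_name;
    private $age;

    // The constructor is called automatically when an object is
    // instantiated, and initializes the object's properties.
    public function __construct($first_name, $last_name, $age)
    {
        $this->first_name = $first_name;
        $this->last_name  = $last_name;
        $this->age        = $age;
    }

    public function getFirstName()
    {
        return $this->first_name;
    }

    public function getLastName()
    {
        return $this->last_name;
    }

    public function getAge()
    {
        return $this->age;
    }
}
```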

The class Employee statement in the first line defines the Employee class. Then, we go on to declare the properties, the constructor, and the other class methods.

Class Properties in PHP

You could think of class properties as variables that are used to hold information about the object. In the above example, we’ve defined three properties—first_name, last_name, and age. In most cases, class properties are accessed via instantiated objects.

These properties are private, which means they can only be accessed from within the class. This is the safest access level for properties. We’ll discuss the different access levels for class properties and methods later in this article.

Constructors for PHP Classes

A constructor is a special class method which is called automatically when you instantiate an object. We’ll see how to instantiate objects in the next couple of sections, but for now you just have to know that a constructor is used to initialize object properties when the object is being created.

You can define a constructor by defining the __construct method.

Methods for PHP Classes

We can think of class methods as functions that perform specific actions associated with objects. In most cases, they are used to access and manipulate object properties and perform related operations.

In the above example, we’ve defined the getLastName method, which returns the last name associated with the object.

So that’s a brief introduction to the class structure in PHP. In the next section, we’ll see how to instantiate objects of the Employee class.

What Is an Object in PHP?

In the previous section, we discussed the basic structure of a class in PHP. Now, when you want to use a class, you need to instantiate it, and the end result is an object. So we could think of a class as a blueprint, and an object is an actual thing that you can work with.

In the context of the Employee class which we've just created in the previous section, let's see how to instantiate an object of that class.

To instantiate an object of any class, use the new keyword along with the class name, and you'll get back a new object instance of that class.

If a class has defined the __construct method and it requires arguments, you need to pass those arguments when you instantiate an object. In our case, the Employee class constructor requires three arguments, and thus we've passed these when we created the $objEmployee object. As we discussed earlier, the __construct method is called automatically when the object is instantiated.

Next, we've called class methods on the $objEmployee object to print the information which was initialized during object creation. Of course, you can create multiple objects of the same class, as shown in the following snippet.
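The snippet is missing from this extract; here is a self-contained sketch (the names and values are illustrative, not the original ones):

```php
<?php
// A condensed version of the Employee class from earlier in the article.
class Employee
{
    private $first_name;
    private $last_name;
    private $age;

    public function __construct($first_name, $last_name, $age)
    {
        $this->first_name = $first_name;
        $this->last_name  = $last_name;
        $this->age        = $age;
    }

    public function getLastName()
    {
        return $this->last_name;
    }
}

// The constructor requires three arguments, so we pass them to new.
$objEmployee = new Employee('Andy', 'Prasetya', 34);
echo $objEmployee->getLastName(); // Prasetya

// Multiple independent objects of the same class.
$objAnother = new Employee('Sally', 'Smith', 28);
echo $objAnother->getLastName(); // Smith
```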

The following image is a graphical representation of the Employee class and some of its instances.

Simply put, a class is a blueprint which you can use to create structured objects.

Encapsulation

In the previous section, we discussed how to instantiate objects of the Employee class. It's interesting to note that the $objEmployee object itself wraps together properties and methods of the class. In other words, it hides those details from the rest of the program. In the world of OOP, this is called data encapsulation.

Encapsulation is an important aspect of OOP that allows you to restrict access to certain properties or methods of the object. And that brings us to another topic for discussion: access levels.

Access Levels

When you define a property or a method in a class, you can declare it to have one of these three access levels—public, private, or protected.

Public Access

When you declare a property or a method as public, it can be accessed from anywhere outside the class. The value of a public property can be modified from anywhere in your code.
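The example referenced below isn't included in this extract; a minimal sketch, assuming a Person class with a public name property:

```php
<?php
class Person
{
    // Public: readable and writable from anywhere in your code.
    public $name = '';
}

$person = new Person();
$person->name = 'Bob Smith';  // set the property from outside the class
echo $person->name;           // Bob Smith
```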

As you can see in the above example, we've declared the name property to be public. Hence, you can set it from anywhere outside the class, as we've done here.

Private Access

When you declare a property or a method as private, it can only be accessed from within the class. This means that you need to define getter and setter methods to get and set the value of that property.
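The original snippet is missing here; a sketch of a private property with the getter and setter methods mentioned below (the method bodies are assumptions):

```php
<?php
class Person
{
    // Private: only accessible from within the class itself.
    private $name = '';

    public function getName()
    {
        return $this->name;
    }

    public function setName($name)
    {
        // Any special logic on change (validation, logging, etc.) goes here.
        $this->name = $name;
    }
}

$person = new Person();
$person->setName('Bob Smith');
echo $person->getName(); // Bob Smith

// $person->name = 'Eve'; // Fatal error: Cannot access private property Person::$name
```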

If you try accessing a private property from outside the class, it'll throw the fatal error Cannot access private property Person::$name. Thus, you need to set the value of the private property using the setter method, as we did using the setName method.

There are good reasons why you might want to make a property private. For example, perhaps some action should be taken (updating a database, say, or re-rendering a template) if that property changes. In that case, you can define a setter method and handle any special logic when the property is changed.

Protected Access

Finally, when you declare a property or a method as protected, it can be accessed by the same class that has defined it and classes that inherit the class in question. We'll discuss inheritance in the very next section, so we'll get back to the protected access level a bit later.

Inheritance

Inheritance is an important aspect of the object-oriented programming paradigm which allows you to inherit properties and methods of other classes by extending them. The class which is being inherited is called the parent class, and the class which inherits the other class is called the child class. When you instantiate an object of the child class, it inherits the properties and methods of the parent class as well.

Let's have a look at the following screenshot to understand the concept of inheritance.

In the above example, the Person class is the parent class, and the Employee class extends or inherits the Person class and so is called a child class.

Let's try to go through a real-world example to understand how it works.
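The example code didn't survive extraction; the following condensed sketch uses the class and method names mentioned below (callToProtectedNameAndAge, getNameAndAge, setAge), with the method bodies as assumptions:

```php
<?php
class Person
{
    protected $name = '';
    protected $age = 0;

    public function getName()
    {
        return $this->name;
    }

    public function setName($name)
    {
        $this->name = $name;
    }

    protected function callToProtectedNameAndAge()
    {
        return "{$this->name} is {$this->age} years old.";
    }

    private function callToPrivateNameAndAge()
    {
        return "{$this->name} ({$this->age})";
    }
}

class Employee extends Person
{
    // Allowed: $age is protected, so a child class can modify it.
    public function setAge($age)
    {
        $this->age = $age;
    }

    // Allowed: a child class can call protected parent methods.
    public function getNameAndAge()
    {
        return $this->callToProtectedNameAndAge();
    }
}

$employee = new Employee();
$employee->setName('Alice');      // public method inherited from Person
$employee->setAge(30);
echo $employee->getNameAndAge();  // Alice is 30 years old.

// $employee->callToPrivateNameAndAge(); // Fatal error: private to Person
```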

The important thing to note here is that the Employee class has used the extends keyword to inherit the Person class. Now, the Employee class can access all properties and methods of the Person class that are declared as public or protected. (It can't access members that are declared as private.)

In the above example, the $employee object can access getName and setName methods that are defined in the Person class since they are declared as public.

Next, we've accessed the callToProtectedNameAndAge method using the getNameAndAge method defined in the Employee class, since it's declared as protected. Finally, the $employee object can't access the callToPrivateNameAndAge method of the Person class since it's declared as private.

On the other hand, you can use the $employee object to set the age property of the Person class, as we did in the setAge method which is defined in the Employee class, since the age property is declared as protected.

So that was a brief introduction to inheritance. It helps you to reduce code duplication, and thus encourages code reusability.

Polymorphism

Polymorphism is another important concept in the world of object-oriented programming which refers to the ability to process objects differently based on their data types.

For example, in the context of inheritance, if the child class wants to change the behavior of the parent class method, it can override that method. This is called method overriding. Let's quickly go through a real-world example to understand the concept of method overriding.
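The snippet is missing from this extract; here is a sketch of method overriding with the BoldMessage class named below, where the actual formatting markup is an assumption:

```php
<?php
class Message
{
    protected $text;

    public function __construct($text)
    {
        $this->text = $text;
    }

    public function formatMessage()
    {
        return "<i>{$this->text}</i>";
    }
}

class BoldMessage extends Message
{
    // Override the parent method to change its behavior.
    public function formatMessage()
    {
        return "<b>{$this->text}</b>";
    }
}

$message = new Message('Hello World');
echo $message->formatMessage();     // <i>Hello World</i>

$message = new BoldMessage('Hello World');
echo $message->formatMessage();     // <b>Hello World</b>
```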

As you can see, we've changed the behavior of the formatMessage method by overriding it in the BoldMessage class. The important thing is that a message is formatted differently based on the object type, whether it's an instance of the parent class or the child class.

(Some object-oriented languages also have a kind of method overloading that lets you define multiple class methods with the same name but a different number of arguments. This isn't directly supported in PHP, but there are a couple of workarounds to achieve similar functionality.)

Conclusion

Object-oriented programming is a vast subject, and we've only scratched the surface of its complexity. I do hope that this tutorial helped you get started with the basics of OOP and that it motivates you to go on and learn more advanced OOP topics.

Object-oriented programming is an important aspect in application development, irrespective of the technology you're working with. Today, in the context of PHP, we discussed a couple of basic concepts of OOP, and we also took the opportunity to introduce a few real-world examples.

Published 2018-12-03

ARIA is an important feature that web developers can use to help make their sites more accessible. In previous pieces, we talked about how you can implement ARIA, whether you’re doing so on an eCommerce site or in more niche places.

So far, the focus of this series has been on how to implement ARIA—for example, how to add a role to an element or how to code more accessible forms. With this piece, the focus is going to shift a bit towards other aspects of online accessibility, like why and when we should use ARIA. We’ll also cover some common questions asked on previous posts throughout the series.

Alright, let’s begin!

What Is ARIA?

At its base, ARIA is an extension to current web development languages (mainly HTML) that allows for enhanced accessibility to end users.

What exactly does that mean, though?

Extending HTML With ARIA

HTML has some shortcomings when it comes to how elements are defined and how elements can be related to one another.

In many cases, HTML elements (such as the <div> tag) are too broadly defined to be useful to someone navigating a site with a screen reader, or an element may have too many possible meanings to be interpreted (e.g. an image could also be used as a button). ARIA adds additional attributes to HTML elements to allow for definitions that can be layered on top of the already existing markup language, adding clarity as needed.

The second major benefit is the relation of elements. With HTML, every element exists as a child and/or a parent of another element. But this structure doesn't capture all semantic relationships. This can lead to scenarios like a controller and the element it controls not being clearly associated if they are placed in separate div containers. This becomes increasingly important in complex site structures or when altering the DOM using JavaScript.

Beyond those two benefits, there are a host of others that tend to get less attention, but provide excellent functionality nonetheless.

ARIA for Advanced Accessibility

Although much of the web is pushing towards easier-to-use UX, ARIA fills an important role for sites that can't simplify their structure. There are cases where a website might require advanced controls, such as the tree views commonly used by filesystems or drag-and-drop interactions. By providing additional structure, ARIA makes these controls available to people who might not be able to use them otherwise.

Another key capability is that ARIA doesn’t just extend HTML—it can also be used to make other, less accessibility-friendly technologies available to more users. This is often the case for people using AJAX or DHTML for their web applications.

And really, those are just scratching the surface. If you’re interested in finding out more about the capabilities, attributes, and other useful parameters, take a look at the WAI-ARIA overview.

Why Should We Use ARIA?

The biggest boon of using WAI-ARIA is that it increases accessibility on your site. Users that rely on screen readers, have low vision, or use alternative interfaces for the web benefit greatly from the implementation—and in some cases, they may not be able to use a website to its full extent without it. For many, accessibility is seen as a core value of the web, and as developers, we should strive to provide it wherever possible.

Beyond the best practices aspect, there is also a business incentive to implement ARIA. With 2-3% of Americans having some form of low vision, a significant portion of most markets would benefit from use of the standard. In addition, having an implementation in place now will pay off as ARIA adoption grows among non-standard web interfaces and spreads to devices such as smart speakers.

Best Practices for ARIA Implementation

Until now, we’ve focused on the actual methods of implementation in this series. With the basics of how to add ARIA to your site in place now, let’s take a look at some key guidelines for putting together your own implementation.

Using a Light Touch Approach

A good implementation of ARIA doesn’t add extra attributes and roles at every opportunity.

When possible, use the least amount of additional code needed to convey the necessary information. Overusing ARIA is like turning your site’s homepage into a full sitemap for a sighted user: people using screen readers will be overwhelmed by the notifications and additional markup on a page that overuses ARIA, achieving the opposite of what we set out to do.

Providing enough additional markup to make your important elements’ context clear is the goal, and anything beyond that is probably not necessary.

Avoiding Redundancy (Especially When Using HTML5)

Throughout the previous posts in this series, we talked about the role attribute quite a bit, adding it everywhere we needed the user to be aware of an element's purpose. There’s an important exception, however, that we didn’t talk about before: the need to avoid redundancy.

Whenever an element already clearly defines what it is, you can skip adding ARIA to it. Why? Because ARIA is meant to extend existing code to make it more understandable, and in some cases the code is already clear, structured, and easy to interpret on its own.

This happens frequently for people using elements introduced in HTML5. For example, when using the <button> element, you no longer need to add in the attribute role="button", since the role is already explicitly defined by the HTML code.

Testing With a Screen Reader

Another key to creating a good ARIA implementation is to make sure that you are testing your site with a screen reader or two. You’re likely to be surprised at the details of how they function and how easy it is to make your website annoying to use by accident.

Many ARIA attributes can create notifications or alerts for the end user, and if you are utilizing an aria-live content area that changes every 10 seconds, it's possible that the implementation is making it more difficult to use your website.

Iterative Accessibility for the Web

When adding accessibility measures to your site, it's important to remember that it isn’t an all-or-nothing task. You don’t have to completely deck out your entire website in one attempt, and iterative additions are probably the best way to go.

Starting with major content areas and navigations and then spreading throughout the site slowly is a solid strategy, completing more as time goes on. Even if you schedule 15 or 30 minutes each month to add just a bit of accessibility to your site, it’s a step in the right direction.

Good UX Is Good Accessibility (And Vice Versa)

ARIA isn’t a cure-all for accessibility issues. It is crucial to place a heavy focus on good UX design, especially when it comes to text readability, as another tool in your accessibility toolkit.

If you’d like to delve deeper into the specifics of how UX and development (outside of ARIA) can be wielded to improve accessibility, take a look at the Web Content Accessibility Guidelines.

Frequently Asked Questions About ARIA

This series on ARIA resonated with a lot of readers and sparked discussion in the community. Let’s keep it up! Increasing awareness and improving these tutorials is the best way to bring accessibility to the forefront.

Here are some of the most common questions and commentary that popped up around the web:

What Browsers Support WAI-ARIA?

A bunch! Most modern browsers support the features of ARIA to some extent, though support varies slightly from browser to browser. If you want to see the specifics for each browser, you can use a tool like Can I Use.

Can ARIA Be Used With WordPress?

Yep! A few WordPress themes already have ARIA integrated, but you can add it to any theme that you can edit the source code for. In addition, you can also use ARIA with almost any Content Management System!

What Happens If ARIA Isn’t Supported?

Since ARIA doesn’t affect rendering, nothing! In most cases, if a device doesn’t support ARIA, it’ll just be ignored entirely.

Make the Web a Better Place!

Putting all of the articles from this series together, I hope you now have all of the tools needed to improve the accessibility of your site!

If you have any questions, feedback, or have a correction for something I’ve said, please let me know in the comments!

https://www.4elements.com/blog/read/accessible-apps-barriers-to-access-and-getting-started-with-accessibility

Today, we are highlighting accessibility—something we strive to think about every day here at Envato Tuts+. December 3 is International Day of Persons With Disabilities. Created by the United Nations in 1992, this day seeks to promote the rights and well-being of persons with disabilities in all spheres of society. More than one billion people worldwide live with some form of disability.

In the context of web and app development, the goal of accessibility is that your tool works for all people, regardless of the hardware or software they are using, and wherever they fall on the spectrum of hearing, movement, visual, and cognitive ability. With rapid changes in digital and assistive technology, meeting this goal requires thought, testing, and an overall understanding of the way online tools are used by different people with diverse needs.

There are tools here at Envato Tuts+ and across the web that can help you learn how to design and code for accessibility, and I will link to some in this article. But in this post, I would like to look at the role of developers in web accessibility and talk about why the best time to think about accessibility is at the beginning of a project. I'll also introduce some emerging issues around developing for accessibility, and raise considerations around barriers to access and advocate for the importance of engaging with users at different stages of the development process.

Thinking About Barriers to Access

The UN’s Convention on the Rights of Persons with Disabilities points out that the existence of barriers is a central feature of disability—that disability is an evolving relationship between people with impairments and the social and environmental barriers with which they interact. Barriers are in themselves disabling, and exclusion is a structural problem that lives in our systems, rather than in the bodies of those with impairments. Because of this, removing barriers is a prerequisite to social inclusion for all people.

Let's think a bit about accommodations and barriers to access.

Last week, a friend pointed out that light bulbs are an assistive device for people who rely on their vision to get around. Sighted people need light to navigate the world, so installing light bulbs in a building mitigates that barrier; people without sight do not need this accommodation, as they navigate using other strategies. What we consider a “normal” accommodation is socially conditioned, rather than an objective truth.

I bring up this example to disrupt the idea of “normal” and move away from thinking of accessible design as a special accommodation. If we prioritize making our technology barrier-free, rather than thinking about accessibility as an exception or afterthought, we can shift the concept of “normal” to accommodate all people and exclude none.

The strength of web technology is that, by its nature, it removes barriers that exist in the physical world. When building web and mobile app technology, it is vital that we not add barriers back in to the technology through the way we design our tools. In order to do this, we have to understand how different people use our tools and what their needs are. And just like when considering whether to build a ramp or a set of stairs, the best time to think about this is before we start building.

Accessibility: Getting Started

When it comes to building accessible web tools, there are two main considerations: take advantage of existing accessibility infrastructure, and stay out of the way of assistive devices and other accessibility strategies.

Using alt text for all non-text elements (images, graphs, charts, etc.) is an example of how you can take advantage of existing infrastructure. Screen readers rely on alt text to parse web content for visitors with visual impairments. This is not a complicated fix—alt text is simply good design. By designing for accessibility, you will improve the functionality of your website. In this case, search engines rely on alt text to better “read” websites. According to the W3C, case studies show that accessible websites have better search results and increased audience reach.
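For instance, a minimal sketch of the markup (the file names and descriptions here are hypothetical):

```html
<!-- Descriptive alt text lets screen readers and search engines parse the image -->
<img src="sales-chart.png"
     alt="Bar chart showing monthly sales rising from 120 units in January to 450 in June">

<!-- Purely decorative images get an empty alt attribute so screen readers skip them -->
<img src="divider.png" alt="">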

Ensuring that keyboard input works with your tool is an example of staying out of the way of users’ accessibility strategies. Using a mouse requires a degree of fine motor control that many people do not have, so they rely on keyboard input to navigate websites and apps. If your web tool can be navigated with keyboard input, it also allows assistive technologies that mimic the keyboard, such as speech input, to function properly. If you build a tool that cannot be navigated with keyboard input, you are unnecessarily creating an inaccessible environment for users.

Knowing how different people interact with your web tool gives you the ability to make choices that support their accessibility strategies. It is so much better to do this at the beginning of a project than to try to address accessibility as an afterthought. An illustrative example: UC Berkeley ran into trouble when it made thousands of uncaptioned videos—inaccessible to people with hearing impairments—available online. The university was legally required to caption the content, but rather than pay for the expensive retrofit, it eventually pulled the videos from public access.

By making websites more accessible, you make your web tools work better, more of the time, for more people. The power you have as a developer allows you to address accessibility issues at the earliest stages, when it is the easiest and least expensive time to do so.

ARIA (the Accessible Rich Internet Applications suite) is a set of attributes that makes web content and apps more accessible, especially for people who use screen readers or cannot use a mouse. The main purpose of ARIA is to add semantic information that plain HTML markup cannot express on its own. In other words, HTML tells the browser what elements are on the page, and ARIA tells assistive technology what those elements mean and how they behave. In this series, you'll learn how to use ARIA to make your web apps more accessible.

It's important to make sure your checkboxes and radio buttons remain accessible to assistive technology (AT) and keyboard users. In this post, you'll learn how to make sure your custom checkboxes and radio buttons are accessible.
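As a hedged sketch of what that involves (the IDs and label text here are hypothetical), a custom checkbox built from a div needs a role and state for screen readers, a tabindex so keyboard users can reach it, and a Space-key handler so it behaves like the native control:

```html
<div class="checkbox" id="subscribe" role="checkbox" tabindex="0"
     aria-checked="false" aria-labelledby="subscribe-label">
</div>
<span id="subscribe-label">Subscribe to the newsletter</span>

<script>
  const box = document.getElementById('subscribe');

  function toggle() {
    const checked = box.getAttribute('aria-checked') === 'true';
    // Screen readers announce the new state via aria-checked
    box.setAttribute('aria-checked', String(!checked));
  }

  box.addEventListener('click', toggle);
  box.addEventListener('keydown', (event) => {
    if (event.key === ' ') {   // Space toggles, matching the native checkbox
      event.preventDefault();  // stop the page from scrolling
      toggle();
    }
  });
</script>
```

When you can, prefer the native `<input type="checkbox">`, which provides all of this behavior for free.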

User Testing Is Everything

You’ve thought about accessibility. You are committed to removing barriers to access in your web and mobile apps. You’ve built a tool with up-to-date accessibility guidelines in mind. Is your app accessible?

There are guides on Tools and Tips for Testing for Accessibility, and that is an important part of the design process. Reading the guide and testing your web tool is a great idea. But go further: the gold standard in designing for accessibility is user testing. Involve users early in your project and throughout the development process.

Some accessibility requirements are easy to meet, and some are more challenging. Understanding how different people use your tools will give you so much insight into how to build for accessibility. Everyone has different needs, different browsers, different assistive devices. No guide or checklist is going to be able to fully capture the breadth of experience of the people using your tool.

Just as you would want to know when users are hitting errors or 404 pages, be grateful if and when you receive feedback that your tool is not currently meeting a user’s accessibility needs. Solicit this kind of feedback, keep an open dialog with users, and find solutions to the issues they bring to your attention.

Anything you build will evolve—nothing is static in web and mobile technology. The real-life experience of your users is the most valuable input you can receive, so if you hear something is not working, say thank you. And then find a way to make it work.

Conclusion

Thank you for spending some time with me on this International Day of Persons With Disabilities. My intention with this article is to support the dialog around web and mobile accessibility and to give you a starting point for your own thoughts, research, and testing. Whether you are just getting started with programming or are an experienced programmer, it is so vital that you are prioritizing accessibility in your development projects, so that your tools can work well for each person who uses them.

]]>2018-12-03T00:30:32+00:00//www.4elements.com/blog/read/dramatically-speed-up-your-react-front-end-app-using-lazy-loading
https://www.4elements.com/blog/read/dramatically-speed-up-your-react-front-end-app-using-lazy-loading#When:13:00:12ZA constant challenge faced by front-end developers is the performance of our applications. How can we deliver a robust and full-featured application to our users without forcing them to wait an eternity for the page to load? The techniques used to speed up a website are so numerous that it can often be confusing to decide where to focus our energy when optimising for performance and speed.

Thankfully, the solution isn't as complicated as it sometimes might seem. In this post, I'll break down one of the most effective techniques used by large web apps to speed up their user experience. I'll go over a package to facilitate this and ensure that we can deliver our app to users faster without them noticing that anything has changed.

What Does It Mean for a Website to Be Fast?

The question of web performance is as deep as it is broad. For the sake of this post, I'm going to try and define performance in the simplest terms: send as little as you can as fast as you can. Of course, this might be an oversimplification of the problem, but practically speaking, we can achieve dramatic speed improvements by simply sending less data for the user to download and sending that data fast.

For the purpose of this post, I'm going to focus on the first part of this definition—sending the least possible amount of information to the user's browser.

Invariably, the biggest offenders when it comes to slowing down our applications are images and JavaScript. In this post, I'm going to show you how to deal with the problem of large application bundles and speed up our website in the process.

React Loadable

React Loadable is a package that allows us to lazy load our JavaScript only when it's required by the application. Of course, not all websites use React, but for the sake of brevity I'm going to focus on implementing React Loadable in a server-side rendered app built with Webpack. The final result will be multiple JavaScript files delivered to the user's browser automatically when that code is needed. If you want to try out the completed code, you can clone the example source code from our GitHub repo.

Using our definition from before, this simply means we send less to the user up front so that data can be downloaded faster and our user will experience a more performant site.
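Under the hood, all of this rests on the dynamic import() expression, which Webpack uses as a split point and which also works in plain Node. A minimal, framework-free sketch of the idea (the module choice is arbitrary):

```javascript
// Conceptual sketch of lazy loading: import() returns a promise, and the
// module is only evaluated when this code path actually runs, not at startup.
async function loadPlatform() {
  // Heavy dependency pulled in on demand instead of in the initial payload
  const os = await import('node:os');
  return os.platform();
}

loadPlatform().then((platform) => {
  console.log('lazily loaded module reports platform:', platform);
});
```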

1. Add React Loadable to Your Component

I'll take an example React component, MyComponent. I'll assume this component is made up of two files, MyComponent/MyComponent.jsx and MyComponent/index.js.

In these two files, I define the React component exactly as I normally would in MyComponent.jsx. In index.js, I import the React component and re-export it—this time wrapped in the Loadable function. Using the ECMAScript dynamic import() feature, I can indicate to Webpack that I expect this file to be dynamically loaded. This pattern allows me to easily lazy load any component I've already written. It also allows me to keep the lazy-loading logic separate from the rendering logic. That might sound complicated, but here's what this would look like in practice:
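Here is a sketch of MyComponent/index.js based on the react-loadable API (the null loading placeholder is my assumption—in a real app you would render a spinner or error message instead):

```jsx
// MyComponent/index.js
import Loadable from 'react-loadable';

const LoadableMyComponent = Loadable({
  // Webpack sees the dynamic import() and splits MyComponent.jsx
  // into its own bundle (myComponent.min.js in this example)
  loader: () => import('./MyComponent'),
  // Rendered while the bundle is downloading (or if loading fails)
  loading: () => null,
});

export default LoadableMyComponent;
```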

I've now introduced React Loadable into MyComponent. I can add more logic to this component later if I choose—this might include introducing a loading state or an error handler to the component. Thanks to Webpack, when we run our build, I'll now be provided with two separate JavaScript bundles: app.min.js is our regular application bundle, and myComponent.min.js contains the code we've just written. I'll discuss how to deliver these bundles to the browser a little later.

2. Simplify the Setup With Babel

Ordinarily, I'd have to include two extra options when passing an object to the Loadable function, modules and webpack. These help Webpack identify which modules we should be including. Thankfully, we can obviate the need to include these two options with every component by using the react-loadable/babel plugin. This automatically includes these options for us:
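Assuming Babel is already part of the build, enabling the plugin is one line in .babelrc:

```json
{
  "plugins": ["react-loadable/babel"]
}
```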

3. Preload and Capture Modules on the Server

The first step is going to be to instruct React Loadable that I want all modules to be preloaded. This allows me to decide which ones should be loaded immediately on the client. I do this by modifying my server/index.js file like so:
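A sketch based on the react-loadable API (the Express app import and port are assumptions):

```js
// server/index.js — preload every loadable component before accepting
// requests, so renderToString can render them synchronously later
import Loadable from 'react-loadable';
import app from './app';

Loadable.preloadAll().then(() => {
  app.listen(3000, () => {
    console.log('Server listening on port 3000');
  });
});
```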

The next step is going to be to push all components I want to render to an array so we can later determine which components require immediate loading. This is so the HTML can be returned with the correct JavaScript bundles included via script tags (more on this later). For now, I'm going to modify my server file like so:
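A sketch of the render handler (App and the surrounding request plumbing are assumptions); Loadable.Capture reports each lazily loaded module that this particular render actually used:

```jsx
import ReactDOMServer from 'react-dom/server';
import Loadable from 'react-loadable';
import App from './App';

const modules = [];
const html = ReactDOMServer.renderToString(
  <Loadable.Capture report={moduleName => modules.push(moduleName)}>
    <App />
  </Loadable.Capture>
);
// `modules` now lists every dynamic module needed for this request
```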

Every time a component is used that requires React Loadable, it will be added to the modules array. This is an automatic process done by React Loadable, so this is all that's required on our part for this process.

Now we have a list of modules that we know will need to be rendered immediately. The problem we now face is mapping these modules to the bundles that Webpack has automatically produced for us.

4. Map Webpack Bundles to Modules

So now I've instructed Webpack to create myComponent.min.js, and I know that MyComponent is being used immediately, so I need to load this bundle in the initial HTML payload we deliver to the user. Thankfully, React Loadable provides a way for us to achieve this, as well. In my client Webpack configuration file, I need to include a new plugin:
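In the client Webpack config, the plugin shipped with react-loadable writes out the manifest at build time (the output path here is my assumption):

```js
// webpack.config.js (client build)
const { ReactLoadablePlugin } = require('react-loadable/webpack');

module.exports = {
  // ...existing entry, output, and module rules...
  plugins: [
    new ReactLoadablePlugin({
      filename: './dist/loadable-manifest.json',
    }),
  ],
};
```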

The loadable-manifest.json file will provide me a mapping between modules and bundles so that I can use the modules array I set up earlier to load the bundles I know I'll need. In my case, this file might look something like this:
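The exact keys, hashes, and paths depend on your Webpack configuration; a hypothetical entry for MyComponent might look like:

```json
{
  "./src/MyComponent/MyComponent.jsx": [
    {
      "id": 1,
      "name": "./src/MyComponent/MyComponent.jsx",
      "file": "myComponent.min.js",
      "publicPath": "/dist/myComponent.min.js"
    }
  ]
}
```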

5. Include Bundles in Your HTML

The final step in loading our dynamic bundles on the server is to include these in the HTML we deliver to the user. For this step, I'm going to combine the output of steps 3 and 4. I can start by modifying the server file I created above:
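A sketch, still inside the render handler (the HTML template and `res` are assumptions): getBundles maps the modules captured during the render to the bundle files Webpack emitted, and the resulting script tags go out with the page.

```js
const { getBundles } = require('react-loadable/webpack');
const stats = require('./dist/loadable-manifest.json');

// `modules` is the array filled in by Loadable.Capture during the render
const bundles = getBundles(stats, modules);

res.send(`
  <html>
    <body>
      <div id="app">${html}</div>
      <script src="/dist/app.min.js"></script>
      ${bundles
        .map(bundle => `<script src="${bundle.publicPath}"></script>`)
        .join('\n')}
    </body>
  </html>
`);
```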

6. Load the Server-Rendered Bundles on the Client

The final step to using the bundles that we've loaded on the server is to consume them on the client. Doing this is simple—I can just instruct React Loadable to preload any modules it's found to be immediately available:
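A sketch of the client entry point (App and the mount point ID are assumptions): preloadReady waits until every bundle the server included has loaded, so hydration matches the server-rendered markup.

```jsx
import React from 'react';
import ReactDOM from 'react-dom';
import Loadable from 'react-loadable';
import App from './App';

Loadable.preloadReady().then(() => {
  ReactDOM.hydrate(<App />, document.getElementById('app'));
});
```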

Conclusion

Following this process, I can split my application bundle into as many smaller bundles as I need. In this way, my app sends less to the user and only when they need it. I've reduced the amount of code that needs to be sent so that it can be sent faster. This can have significant performance gains for larger applications. It can also set smaller applications up for rapid growth should the need arise.

]]>2018-11-15T13:00:12+00:00//www.4elements.com/blog/read/15-best-modern-javascript-admin-templates-for-react-angular-and-vue.js1
https://www.4elements.com/blog/read/15-best-modern-javascript-admin-templates-for-react-angular-and-vue.js1#When:13:41:58ZAre you building an app and looking for tools that can help you streamline your build? Take the effort out of your next front-end app build with one of these powerful admin templates.

Whether you prefer to work with React, Angular, or Vue.js, there's a range of templates available on ThemeForest that make it painless to create beautiful, interactive UIs. Built using cutting-edge technology, these templates offer flexibility and dependability for your app build. Create a stunning UI easily by selecting from modular components and clean layouts so that you can focus on the business logic of your app.

React Admin Templates

React is a JavaScript library for building user interfaces that has taken the web development world by storm. React is known for its blazing-fast performance and has spawned an ecosystem of thousands of related modules on NPM, including many tooling options.

These admin templates and dashboards are a great starting point for your next React app.

Isomorphic is a React and Redux-powered single-page admin dashboard. It's based on a progressive web application pattern and is highly optimized for your next React app. With no need to install or configure tools like Webpack or Babel, you can get started building your app immediately.

This template helps you write apps that behave consistently, run properly in different environments, and are easy to test. With Sass and CSS styling modules, multilingual support, a built-in Algolia search tool, Firestore CRUD, and easy-to-integrate code, you can use this template to build anything you want.

User justinr1234 says:

“Easily the most well-designed template using React out there, from both a code and design perspective. Integrating the code off the shelf was a breeze. If you have an existing app or are looking to roll a new one on the front end, this template successfully solves the problem for either use case. Excellent product!”

Are you building a single-page app and interested in moving to React and Redux? Don’t start from scratch—build a scalable, highly polished admin app with this React, Redux, Bootstrap, and Ant Design template that works well on mobile, tablet, and desktop.

Clean UI React is create-react-app based, so getting started is simple. Modular code allows you to add and remove components with ease. Developer friendly and highly customizable, this template includes 9 example apps, more than 50 pages, multiple layout options with easy-to-update Sass or CSS styling, and ample reusable React components.

User hermanaryadinata says:

“The quality is incredibly high and the flexibility is limitless! Highly recommended to buy!”

Kick-start your app project with Jumbo React, a complete admin template. This product includes two React templates, one based on Google Material Design and the other on the stunning flat style. Each template comes with multiple design concepts and hundreds of UI components and widgets, as well as an internationalization feature that allows you to develop a multilingual app.

Think of this template package as a starter kit to build your app. With it, you can develop a scalable React app rapidly and effectively and save yourself time and money in the process.

User Ace_Cooper says:

“Love the amount of components out-of-the-box. Right what I needed to jump start a new project.”

Looking for a template to get your React project started? Fuse is a complete admin template that follows Google’s Material Design guidelines and will allow you to learn some of the advanced aspects of React while you build your app.

This template uses Material UI as the primary UI library and Redux for state management. It comes with built-in page templates, routing, and authorization features, along with five example apps, more than 20 pages, and lots of reusable React components.

User DevX101 says:

“Very well organized template ready for building a real app. Not just visual templates, but includes authorization and modular design. Great starter kit.”

Angular Admin Templates

Angular is more than just the next version of a popular front-end framework. Angular takes all the best parts of AngularJS and improves them. It's a powerful and feature-complete framework that you can use to build fast, professional web apps.

Check out these templates that you can use to get your next Angular app off on the right foot with clean code and great design.

This best-selling template is a 3-in-1 bundle, with Angular 7+, Bootstrap 4, and 21 layered PSD designs. Fuse is based on Google Material Design and comes with AoT compiler support, as well as a complete NgRx example app. This template includes configurable layouts, a skeleton project, built-in apps such as calendar, e-commerce, mail, and chat, and more than 20 pages to get you started.

Fuse supports all modern browsers (Chrome, Firefox, Safari, Edge) and comes with Bootstrap 4, HTML, and CSS versions, along with the Angular app.

User haseeb90 says:

“This is a great theme. Comes with pre-built apps that you just need to plug your logic and back end into. The code quality is great and stays up-to-date with the latest Angular versions.”

Pages is the simplest and fastest way to build a web UI for your dashboard or app. This beautifully designed UI framework comes with hundreds of customizable features, which means that you can style every layout to look exactly the way you want it to.

Pages is built with a clean, intuitive, and fully responsive design that works on all major browsers and devices. Featuring developer-friendly code, full Sass and RTL support, and five unique dashboard layouts, this Angular 5+ ready template boasts flawless design and a 5-star rating.

Clip-Two is an advanced, fully responsive admin template built with AngularJS. AngularJS, the original version of the popular Angular framework, lets you extend the HTML vocabulary. The resulting environment is expressive, readable, and quick to develop in.

Using a Bootstrap UI, Clip-Two is mobile-friendly and comes with ready-to-customize themes offering six different skins and infinite styles with Sass. This template includes features like four-level sidebar menus, CSS3 page transitions, a custom scrollbar for vertical scrollable content, dynamic pagination, and RTL functionality.

User hafizminhas says:

“This is one of the most outstanding Angular 1.x templates available in the market.”

Apex is a powerful and flexible admin template based on Angular 6+ and Bootstrap 4. The Angular CLI makes it easy to maintain and modify this app template. With easy-to-understand code and a handy starter kit, this template works right out of the box. Apex includes multiple solid and gradient menu color options and sizes, with an organized folder structure and more than 500 components and 50 widgets.

This template is fully responsive, clean on every device and modern browser, and comes with AoT and lazy loading. Choose from a few pre-made layout options and customize with ready-to-use elements and popular UI components.

User jklayh says:

“These guys really know how to integrate everything well into one package, and the UI design is amazing. This is highly recommended.”

Stack Admin is a Bootstrap 4 admin template with three niche dashboards that can be used for any type of web app: project management, e-commerce back ends, analytics, or any custom admin panels. It includes eight pre-built templates with an organized folder structure, clean and commented code, RTL support, searchable navigation, and unique menu layouts. This template looks great on all devices, and it comes with a kit to help developers get started quickly.

User sietzekeuning says:

“Beautifully designed and has loads of very useful components. An absolute tip for anybody looking for a very well designed CMS!”

Clean, unique, and blazing fast, Fury is an admin template that offers you everything you need to get started with your next project. Built with Angular and Material Design, this template is the perfect framework for building large enterprise apps, and it allows for a modular component setup.

This template is designed to be lightweight and easy to customize. Features include completely customizable dashboard widgets and Angular Flex-Layout, to provide a fast and flexible way to create your layouts.

User CreativelyMe says:

"The code quality is exceptional. It's clearly the work of a true craftsman. This template is truly a joy to work with, and continues to evolve over time. Excellent!!!"

Able Pro 7.0 is a fully responsive admin template that provides a flexible solution for your project development. Built with a Bootstrap 4 framework, this template has a Material look, with well structured and commented code. This retina-ready template comes with more than 150 pages and infinite design possibilities—use the Live Customizer feature to do one-click checks on color combinations and layout variations.

With more than 100 external plugins included, advanced menu layout options, and ready-to-deploy dashboards and landing pages, Able Pro 7.0 will streamline your app development process to save you time and effort.

User macugi says:

“An amazing template. Very good design, good quality code and also very good customer support.”

Vuely is a fully responsive admin template designed to give you a hassle-free development experience. Carefully crafted to look beautiful on mobile and tablet devices with pre-designed custom pages and integrated features like charts, graphs, and data tables, this template allows you to create your back-end panel with ease. More than 200 UI elements and 78 custom widgets simplify your development process.

Vuely is translation ready with RTL support and comes with multiple color and theme options to give you the flexibility you need.

Looking for a full featured admin template for your Vue.js project? Look no further. This Vue.js admin template is completely modular, so you can modify layouts, colors, and other features without disturbing the rest of the code. Simply customize it with the provided Sass variables. This template is well documented, with seven layout and multiple color scheme options. With all the components you need, this Vue.js template will get you started on your next dashboard build.

User JimOQuinn says:

“Wow, the look and feel of the theme has progressed substantially since the initial release. Great job! Love the no-jQuery framework as I find VueJS much easier to work with. Looking forward to the next release. Keep up the good work!”

Multi-Framework Admin Templates

This Material Design admin template provides high-performance, clean, and modern Vue, React, and Angular versions. This super-flexible template uses SCSS, Gulp, Webpack, a modern NPM workflow, and Flexbox, and it has all the components you need to create your front-end app project. With stunning layouts, over 500 components, lifetime updates, and customer support, this is one of the most complete admin apps available.

User themeuser55 says:

“This is an absolutely amazing theme. It is done with quality in mind, regularly updated, and you can tell the developer cares about his work and clients. I would suggest using this template if you are looking for a production worthy front end for a basis to any modern web application.”

Primer is a creative Material Design admin template, with ahead-of-time (AoT) compilation for a more performant user experience. Fully responsive and packaged with both Angular and React versions, this template has left-to-right and right-to-left (LTR/RTL) support and light and dark colour schemes. Well documented and easy to customize, with this app template you get everything you need to start working on your SaaS, CRM, CMS, or dashboard based project.

User cjackett says:

“This is a very well built template with great flexibility and lots of options. The author continues to update and improve the template, extending its functionality and incorporating new angular packages. Excellent work.”

Conclusion

This is just a sample of the many app admin templates available on ThemeForest. There is a template for you, no matter what your style or specifications. These templates will make coding the front end of your app easier and help you deliver an app that provides a high-quality user experience. All this will save you time and effort, letting you focus on the real details of coding your project.

]]>2018-10-30T13:41:58+00:00//www.4elements.com/blog/read/15-best-modern-javascript-admin-templates-for-react-angular-and-vue.js
https://www.4elements.com/blog/read/15-best-modern-javascript-admin-templates-for-react-angular-and-vue.js#When:19:20:20ZAre you building an app and looking for tools that can help you streamline your build? Take the effort out of your next front-end app build with one of these powerful admin templates.

Whether you prefer to work with React, Angular, or Vue.js, there are a range of templates available on Theme Forest that make it painless to create beautiful, interactive UIs. Built using cutting edge technology, these templates offer flexibility and dependability for your app build. Create a stunning UI easily by selecting from modular components and clean layouts so that you can focus on the business logic of your app build.

React Admin Templates

React is a JavaScript library for building user interfaces that has taken the web development world by storm. React is known for its blazing-fast performance and has spawned an ecosystem of thousands of related modules on NPM, including many tooling options. These admin templates and dashboards are a great starting point for your next React app.

Isomorphic is a React and Redux powered single page admin dashboard. It is based on a progressive web application pattern, and highly optimized for your next React app. With no need to install or configure tools like Webpack or Babel, you can get started building your app immediately. This template helps you write apps that behave consistently, run properly in different environments, and are easy to test. With Sass and CSS styling modules, multilingual support, a built-in Algolia search tool, Firestore CRUD, and easy-to-integrate code, you can use this template to build anything you want.

User justinr1234 says:

“Easily the most well-designed template using React out there, from both a code and design perspective. Integrating the code off the shelf was a breeze. If you have an existing app or are looking to roll a new one on the front end, this template successfully solves the problem for either use case. Excellent product!”

Are you building a single page app and interested in moving to React and Redux? Don’t start from scratch—build a scalable, highly-polished admin app with this React, Redux, Bootstrap, and Ant Design template that works well on mobile, tablet, and desktop. Clean UI React is create-react-app based, so getting started is simple. Modular code allows you to add and remove components with ease. Developer friendly and highly customizable, this template includes 9 example apps, more than 50 pages, multiple layout options with easy-to-update Sass or CSS styling, and ample reusable React components.

User hermanaryadinata says:

“The quality is incredibly high and the flexibility is limitless! Highly recommended to buy!”

Kick start your app project with Jumbo React, a complete admin template. This product includes two React templates, one based on Google Material Design, and the other on the stunning flat style. Each template comes with multiple design concepts and hundreds of UI components and widgets, as well as an internationalization feature that allows you to develop a multilingual app. Think of this template package as a starter kit to build your app. With it, you can develop a scalable React app rapidly and effectively and save yourself time and money in the process.

User Ace_Cooper says:

“Love the amount of components out-of-the-box. Right what I needed to jump start a new project.”

Looking for a template to get your React project started? Fuse is a complete admin template that follows Google’s Material Design guidelines, and will allow you to learn some of the advanced aspects of React while you build your app. This template uses Material UI as the primary UI library, and Redux for state management. It comes with built-in page templates, routing, and authorization features, along with five example apps, more than 20 pages, and lots of reusable React components.

User DevX101 says:

“Very well organized template ready for building a real app. Not just visual templates, but includes authorization and modular design. Great starter kit.”

Angular Admin Templates

Angular is more than just the next version of a popular front-end framework. Angular takes all the best parts of AngularJS and improves them. It's a powerful and feature-complete framework that you can use to build fast, professional web apps.

Check out these templates that you can use to get your next Angular app off on the right foot with clean code and great design.

This best-selling template is a 3-in-1 bundle, with Angular 7+, Bootstrap 4 and 21 layered PSD designs. Fuse is based on Google Material Design and comes with AoT compiler support, as well as a complete NgRx example app. This template includes configurable layouts, a skeleton project, built-in apps such as calendar, e-commerce, mail, and chat, and more than 20 pages to get you started. Fuse supports all modern browsers (Chrome, Firefox, Safari, Edge) and comes with Bootstrap 4, HTML, and CSS versions, along with the Angular app.

User haseeb90 says:

“This is a great theme. Comes with pre-built apps that you just need to plug your logic and back end into. The code quality is great and stays up-to-date with the latest Angular versions.”

Pages is the simplest and fastest way to build a web UI for your dashboard or app. This beautifully-designed UI framework comes with hundreds of customizable features, which means that you can style every layout to look exactly the way you want it to. Pages is built with a clean, intuitive, and fully responsive design that works on all major browsers and devices. Featuring developer friendly code, full SASS and RTL support, and five unique dashboard layouts, this Angular 5+ ready template boasts flawless design and a 5-star rating.

Clip-Two is an advanced, fully responsive admin template built with AngularJS. AngularJS, the original version of the popular Angular framework, lets you extend the HTML vocabulary. The resulting environment is expressive, readable, and quick to develop in. Using a Bootstrap UI, Clip-Two is mobile-friendly and comes with ready-to-customize themes with six different skins and infinite styles with SASS. This template includes features like 4 level sidebar menus, CSS3 page transitions, custom scrollbar for vertical scrollable content, dynamic pagination, and RTL functionality.

User hafizminhas says:

“This is one of the most outstanding Angular 1.x templates available in the market.”

Apex is a powerful and flexible admin template based on Angular 6+ and Bootstrap 4. The Angular CLI makes it easy to maintain and modify this app template. With easy-to-understand code and a handy starter kit, this template works right out of the box. Apex includes multiple solid and gradient menu color options and sizes, with organized folder structure and more than 500 components and 50 widgets. This template is fully responsive, clean on every device and modern browser, and comes with AoT and Lazy Loading. Choose from a few pre-made layout options and customize with ready-to-use elements and popular UI components.

User jklayh says:

“These guys really know how to integrate everything well into one package, and the UI design is amazing. This is highly recommended.”

Stack Admin is a Bootstrap 4 modern admin template with unlimited possibilities. This product includes 8 pre-built templates with organized folder structure, clean and commented code, and more than 1500 pages and 1000 components. Stack Admin provides RTL support, searchable navigation, unique menu layouts and advanced cards. With three niche dashboards, Stack Admin can be used for any type of web app: project management, e-commerce back ends, analytics, or custom admin panels. This template looks great on all devices, and comes with a kit to help developers get started quickly.

User sietzekeuning says:

“Beautifully designed and has loads of very useful components. An absolute tip for anybody looking for a very well designed CMS!”

Clean, unique and blazing fast, Fury is an admin template that offers you everything you need to get started with your next project. Built with Angular and Material Design, this template is the perfect framework for building large enterprise apps, and allows for a modular component setup. This template is designed to be lightweight and easy to customize. Features include completely customizable dashboard widgets and Angular Flex-Layout, which provides a fast and flexible way to create your layouts.

User CreativelyMe says:

"The code quality is exceptional. It's clearly the work of a true craftsman. This template is truly a joy to work with, and continues to evolve over time. Excellent!!!"

Able Pro 7.0 is a fully responsive admin template that provides a flexible solution for your project development. Built with a Bootstrap 4 framework, this template has a Material look, with well structured and commented code. This Retina-ready template comes with more than 150 pages and infinite design possibilities—use the Live Customizer feature to do one-click checks on color combinations and layout variations. With more than 100 external plugins included, advanced menu layout options, and ready-to-deploy dashboards and landing pages, Able Pro 7.0 will streamline your app development process to save you time and effort.

User macugi says:

“An amazing template. Very good design, good quality code and also very good customer support.”

Vuely is a fully responsive admin template designed to give you a hassle-free development experience. Carefully crafted to look beautiful on mobile and tablet devices, with pre-designed custom pages and integrated features like charts, graphs, and data tables, this template allows you to create your back-end panel with ease. More than 200 UI elements and 78 custom widgets simplify your development process. Vuely is translation ready with RTL support and comes with multiple color and theme options to give you the flexibility you need.

Looking for a full-featured admin template for your VueJS project? Look no further. This VueJS admin template is completely modular, so you can modify layouts, colors, and other features without disturbing the rest of the code. Simply customize with the provided Sass variables. This template is well documented, with seven layout options and multiple color schemes. With all the components you need, this VueJS template will get you started on your next dashboard build.

User JimOQuinn says:

“Wow, the look and feel of the theme has progressed substantially since the initial release. Great job! Love the no-jQuery framework as I find VueJS much easier to work with. Looking forward to the next release. Keep up the good work!”

Multi-Framework Admin Templates

This Material Design admin template provides high-performance, clean, and modern Vue, React, and Angular versions. This super flexible template uses Scss, Gulp, Webpack, a modern NPM workflow, and Flexbox, and has all the components you need to create your front-end app project. With stunning layouts, over 500 components, and lifetime updates and customer support, this is one of the most complete admin apps available.

User themeuser55 says:

“This is an absolutely amazing theme. It is done with quality in mind, regularly updated, and you can tell the developer cares about his work and clients. I would suggest using this template if you are looking for a production worthy front end for a basis to any modern web application.”

Primer is a creative Material Design admin template, with ahead-of-time (AoT) compilation for a more performant user experience. Fully responsive and packaged with both Angular and React versions, this template has left-to-right and right-to-left (LTR/RTL) support and light and dark colour schemes. Well documented and easy to customize, with this app template you get everything you need to start working on your SAAS, CRM, CMS, or dashboard based project.

User cjackett says:

“This is a very well built template with great flexibility and lots of options. The author continues to update and improve the template, extending its functionality and incorporating new angular packages. Excellent work.”

Conclusion

This is just a sample of the many app admin templates available on ThemeForest. There is a template for you, no matter what your style or specifications. These templates will make coding the front end of your app easier and help you deliver an app that provides a high-quality user experience. All this will save you time and effort, and let you focus on the real details of coding your project.

]]>2018-10-29T19:20:20+00:00//www.4elements.com/blog/read/new-course-build-an-app-with-javascript-and-the-mean-stack
https://www.4elements.com/blog/read/new-course-build-an-app-with-javascript-and-the-mean-stack#When:10:39:02Z

You can make your web development work a whole lot easier by taking advantage of the MEAN stack (MongoDB, Express, Angular, and Node.js). Find out how in our comprehensive new course, Build an App From Scratch With JavaScript and the MEAN Stack.

What You’ll Learn

Full-stack web development requires coding both a front-end for the browser and a back-end server. Using JavaScript for both parts of the app makes life a lot simpler for full-stack devs. With the MEAN technologies, you can code a cutting-edge web app in JavaScript, from the front-end all the way down to the database.

In this detailed 3.5-hour course, Derek Jensen will show you how to use the MEAN technologies to build full-stack web apps using only JavaScript (and its close cousin TypeScript). You'll start from absolute scratch, scaffolding an empty project, and build up a complete web app using the MEAN stack.

You'll learn how to configure a MongoDB database, how to write a database abstraction layer, and how to create a REST API to make that data available to the front-end. On the client side, you'll learn how to structure an Angular app, how to create a service to connect with the back-end API, and how to implement each of the UI components that make a complete app.

Watch the Introduction

Take the Course

You can take our new course straight away with a subscription to Envato Elements. For a single low monthly fee, you get access not only to this course, but also to our growing library of over 1,000 video courses and industry-leading eBooks on Envato Tuts+.

Plus you now get unlimited downloads from the huge Envato Elements library of 700,000+ creative assets. Create with unique fonts, photos, graphics and templates, and deliver better projects faster.

]]>2018-10-18T10:39:02+00:00//www.4elements.com/blog/read/hands-on-with-aria-accessibility-recipes-for-web-apps
https://www.4elements.com/blog/read/hands-on-with-aria-accessibility-recipes-for-web-apps#When:12:31:40Z

In the confusing world of web applications, ARIA can help improve accessibility and ease of use for your creations. HTML isn't able to handle many types of relationships between elements on the page, but ARIA is ideal for almost any kind of setup you can come up with. Let’s take a look at what ARIA is, how it can apply to your web app, and some quick recipes you can use for your own sites.

Basics of ARIA

ARIA, also called WAI-ARIA, stands for the Web Accessibility Initiative–Accessible Rich Internet Applications. This initiative, updated by the W3C, aims to give developers a new set of schemas and attributes for making their creations more accessible. It specifically aims to cover the inherent gaps left by HTML. If you’re not familiar with what it does already, you should take a look at our primer on ARIA. You might also be interested in our pieces on ARIA for the Homepage and ARIA for eCommerce.

Briefly, though, ARIA has three main features that we'll be focusing on:

Creating relationships outside of the parent-child association: HTML only allows for relationships between parent and child elements, but the associations we want to define aren't always nested within each other. ARIA lets us define element relationships outside of this constraint.

Defining advanced controls and interactivity: HTML covers many basic UI elements, but there are many more advanced controls that are used around the web that are hard to define outside of their visual component. ARIA helps with that.

Providing access to "live" area update attributes: the aria-live attribute gives screen readers and other devices a way to listen for changes to content on the page, so users can be told about updates as they happen.

ARIA and Web Applications

Before, we looked at adding ARIA to the common elements of eCommerce pages and site homepages. With web apps, however, each one differs drastically from the last. Forms and functions shift between each app, and often even between versions of the same app. Because of this, we’ll treat our implementations here like recipes in a cookbook rather than a wholesale conversion of a page.

When it comes to web apps, a user’s intent is more difficult to discern in a generalized sense. With eCommerce, no matter which site you are on, it is likely that the visitors are looking to purchase a product or service. Web apps serve a variety of purposes, so instead, we’ll focus on creating nuanced controls that are accessible and user friendly.

Let’s get into some of these control types.

Controlling Live Updates With Buttons

The first control we’re going to look at is a displayed value updated by a button press. These types of controls are commonly seen where an element is displaying a quantity that may be adjusted by buttons labelled ‘+’ and ‘-’, but can take many forms, such as arrow buttons that let you cycle through predefined statuses.

A standard implementation can leave some gaps in understanding for the user. It is unclear what elements the buttons affect, how they affect them, and when the element’s value changes.

Below, we’ll use ARIA to create a connection between the buttons and the value display element using the aria-controls attribute. Then, we’ll make the use of the buttons clear by using aria-label and the HTML <label> element. Finally, we’ll utilize the ARIA alert role and the aria-live attribute to let our user know when the value is being updated.
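Here’s a minimal sketch of that pattern. The IDs and the quantity example are placeholders, not taken from any particular app:

```html
<!-- The buttons declare what they control; the value element is announced when it changes -->
<span id="qty-label">Quantity</span>
<button aria-label="Decrease quantity" aria-controls="qty-value">-</button>
<!-- role="alert" makes this a live region; note it implies an assertive live setting -->
<span id="qty-value" role="alert" aria-labelledby="qty-label">1</span>
<button aria-label="Increase quantity" aria-controls="qty-value">+</button>
```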

ARIA Popups and Hover Tooltips

When outfitting a site with ARIA, it is common to use "progressive accessibility". The idea behind this term is that taking a site or web app from its basic form to fully accessible is a daunting task. To deal with this in a way that still makes forward movement, you can implement new features progressively and iteratively.

For a tooltip with a related popup or modal, this means that we can break the problem into two steps, rolling each out as we can. In this case, the tooltip we’re talking about is the common image of a small question mark that opens additional information when hovered over.

To let users know that the question mark image is actually a tooltip, we’ve defined it by using an appropriate role, like this:

<img src="question-mark.jpg" role='tooltip' />

There are a few issues with this implementation, though. Users may still not be aware that hovering over the tooltip initiates a popup with further information. Here’s how we can add that to our code:
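One hedged way to express that is shown below; the IDs are placeholders, aria-haspopup signals that activating the element opens further content, and aria-describedby ties the trigger to that content:

```html
<img src="question-mark.jpg" alt="Help" role="tooltip"
     aria-haspopup="true" aria-describedby="tip-content" />
<!-- Revealed on hover or focus by your own CSS/JS -->
<div id="tip-content" hidden>Further details about this field appear here.</div>
```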

Accessible Input Tooltips

Instead of a hover-based tooltip, it’s also common for a web app to utilize forms where each input has its own associated tooltip.

Without additional ARIA markup, it can be difficult for a user to tell which tooltip applies to which input. Not having this relation in place can render your helper text useless in some cases.

To correct for this, we’ll wrap our tooltips within their own elements. Each of these can be nested near their related input, have their relations established with ARIA, and then can be triggered with JavaScript (or just CSS if you’re crafty).
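A sketch of one such pairing, assuming a hypothetical email field (the IDs and helper text are placeholders):

```html
<label for="email">Email</label>
<!-- aria-describedby creates the explicit input-to-tooltip relation -->
<input id="email" type="email" aria-describedby="email-tip" />
<span id="email-tip" role="tooltip" hidden>We'll only use this to confirm your order.</span>
```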

Status Alerts

“Our service is currently down”, “Your account is suspended”, and related status alerts are commonly used among web apps, and display important information for users. Without ARIA, they can get buried within the information on a page and cause a variety of issues.

Utilizing the ARIA alert role and the aria-live attribute, we can make sure that our users are aware of any issues quickly once they arrive on a page.
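For example (the message text is a placeholder):

```html
<!-- role="alert" implies an assertive live region, so the message is announced
     immediately; aria-live is added here explicitly as belt-and-braces -->
<div role="alert" aria-live="assertive">
  Our service is currently down. Some features may be unavailable.
</div>
```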

Creating a Toolbar

Finally, let’s take a look at another common control element used within web apps: the toolbar. For our purposes, we’re going to be marking up a toolbar that works like this: our web app shows a large amount of data, oriented in a table. Above this table, our toolbar has several buttons that allow users to sort the table in various ways. These buttons include classic sort options such as A to Z and Z to A.

Relationally, these leave some problems concerning accessibility. First, it isn’t clear that those buttons affect the table—we’ll solve this using the aria-controls attribute. It also isn’t clear that the buttons are associated with each other, which may be a useful piece of information for our users. To define this, we’ll be using the toolbar role. Finally, a user doesn’t necessarily know which button was pressed last. To correct this, we’ll use the aria-pressed attribute.

When using the aria-pressed attribute, it's important to note that you’ll have to update these elements as the user interacts with them. This will likely require changing the attributes through JavaScript or jQuery.
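Putting those three pieces together might look like the sketch below; the IDs, button text, and the small script are illustrative placeholders, not a definitive implementation:

```html
<div role="toolbar" aria-label="Sort table" aria-controls="data-table">
  <button aria-pressed="true">A to Z</button>
  <button aria-pressed="false">Z to A</button>
</div>
<table id="data-table"><!-- table rows --></table>
<script>
  // Move aria-pressed to whichever sort button was clicked last
  var buttons = document.querySelectorAll('[role="toolbar"] button');
  buttons.forEach(function (btn) {
    btn.addEventListener('click', function () {
      buttons.forEach(function (b) { b.setAttribute('aria-pressed', 'false'); });
      btn.setAttribute('aria-pressed', 'true');
    });
  });
</script>
```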

Adding ARIA to Your Own Web Apps

With this handful of new control schemes and relations under your belt, you’re well on your way to making your own web app fully accessible! After you’ve added this new markup, think about how you could apply these attributes to other parts of your user interface to maximize the usability of your creation.

Are there attributes, roles, or other features of ARIA that you’d like to know about? Or maybe you have some questions about your own implementations, or corrections for this article? Get in contact using the comment section below, or by tagging kylejspeaker on Twitter!

]]>2018-10-12T12:31:40+00:00//www.4elements.com/blog/read/10-best-wordpress-facebook-widgets
https://www.4elements.com/blog/read/10-best-wordpress-facebook-widgets#When:10:17:34Z

Facebook has over 2.23 billion active users worldwide who spend an average of 20 minutes per visit. That explains why, “on average, the Like and Share Buttons are viewed across almost 10 million websites daily” (via Zephoria).

10. Facebook Social Plugins

Facebook does have a free plugin that's available in the WordPress Plugin Directory. However, while it has many options, it has not been updated in over two years and its ratings are dismal.

I suggest you avoid it.

However, Facebook does provide everything you need to create your own Facebook widget.

You can create:

embedded comments

share buttons

follow buttons

Like buttons

and more

Follow the online prompts and step-by-step instructions to configure your code. Dropping your snippet into the WordPress Text Widget may be all that's required for you to successfully set up your Facebook social plugin.

Build Your Own WordPress Facebook Widget

While most people will find the previously mentioned plugins and solutions useful, there are those who may find building their own WordPress Facebook widget the way to go.

Conclusion

There are many other Facebook WordPress plugins on CodeCanyon if you still haven't found exactly what you're looking for. Of course, if you still can't find what you're looking for, perhaps it's time to think about building your own and becoming an Envato author?

It's just a matter of comparing prices and options, and reading a few user reviews, before finding the right Facebook widget. Facebook is a powerful social media platform that should not be overlooked. Integrating it into your website can make a big difference.

What kind of Facebook widget are you looking for?

]]>2018-10-04T10:17:34+00:00//www.4elements.com/blog/read/hands-on-with-aria-accessibility-for-ecommerce
https://www.4elements.com/blog/read/hands-on-with-aria-accessibility-for-ecommerce#When:00:09:18Z

Looking to make your site more accessible? Or maybe you want to make it easier to traverse your site overall using browsers and other interfaces? Using ARIA, you can do both. Let’s take a look at what ARIA is and how it can benefit an eCommerce site. We'll also go through some examples step by step.

What Is ARIA?

WAI-ARIA stands for the Web Accessibility Initiative–Accessible Rich Internet Applications. This initiative takes the form of a set of guidelines and attributes that are maintained by the W3C. Using these attributes, we can create relations between our site elements that can’t be expressed through HTML alone. The most important for our use here is that we can define element relations outside of the parent-child relationship, and more clearly connect UI elements for our users.

Adding ARIA to eCommerce

Previously, we talked about how to apply ARIA to a general website that resembled a common small business homepage. This time, we’re going to take a closer look at how ARIA can improve the user experience for large eCommerce sites.

We’re going to focus on four key areas of eCommerce that pose unique situations: product pages, category pages (or product aggregate pages), multi-level navigation, and faceted navigation. We’ll be using these two wireframes to guide us through the process:

A very basic product mockup

Example of a Product Listing Page mockup

Preparing for ARIA

In the case of most websites, adding ARIA is a fairly straightforward process. You define the pieces of your site, break them down into landmarks and elements, and add in the necessary code.

We’re going to follow a similar process with our eCommerce site, but we now have a new layer of intricacy. With the complexity that comes with eCommerce sites, ARIA can become a rabbit hole in many cases. While it is important to improve the accessibility of your site as much as possible, we unfortunately will often run into business constraints. Because of this, we’ll want to do a little more planning upfront, prioritizing each of our ARIA additions.

By doing this prioritization, we can ensure that the most important aspects of our site are improved first, making the user experience as good as we can in the time available.

Let’s kick it off by taking a look at some product pages.

ARIA for Product Pages

A staple page for any eCommerce site, these pages typically show a product, its available variations, and a way to add the item to a cart. Each of these interactive elements should be considered separately.

If we needed to prioritize the implementation on this page, we would want to group it like so:

Core product info

Interactive elements

Add to cart button

Expanded product content

The main factor at play here is something we talked about in a previous article: ARIA helps to define an element’s intent. In the case of the expanded content, most of the HTML elements that are being used have elements with semantic meaning and intent that match. This means that while it is useful to put additional ARIA information if we can, it is likely less important than completing the other three areas.

Core Product Information

Let’s start off by adding ARIA to our core product information. This is pretty straightforward due to the simplicity of the elements being used here. The code looks like this:
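A minimal sketch follows; the product name, price, and IDs are placeholders:

```html
<!-- aria-labelledby ties the whole region to the product title -->
<section aria-labelledby="product-title">
  <h1 id="product-title">Example Product</h1>
  <img src="product.jpg" alt="Example Product, front view" />
  <p>$29.99</p>
</section>
```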

Interactive Product Elements

This is where product pages can get a little tricky. Products on an eCommerce site can have quite a few different types of variations present. Beyond just the types available, the number of them that can be utilized simultaneously adds another layer of complexity. In our example, we have three elements that come into play: size, color, and quantity.

Let’s take a look at how you can mark that up. Here's the code for the ARIA-enhanced selection and checkbox elements:
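A hedged sketch of those three controls is below. The IDs and option values are placeholders; native form elements already carry implicit roles, so here the grouping and labelling do most of the accessibility work:

```html
<label for="size">Size</label>
<select id="size">
  <option>Small</option>
  <option>Large</option>
</select>

<fieldset>
  <!-- The legend gives every checkbox in the group a shared context -->
  <legend>Color</legend>
  <input type="checkbox" id="color-red" />
  <label for="color-red">Red</label>
</fieldset>

<label for="quantity">Quantity</label>
<input id="quantity" type="number" value="1" min="1" />
```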

Add to Cart Button

The cart button is similar to a standard button, but we’re going to go out of our way to label it more clearly than other buttons:

<button aria-label="Add to Cart">Add to Cart</button>

Expanded Product Content

Finally, the expanded content area is treated just like a typical content area. Depending on your product pages, however, it might be a good idea to separate your main content landmarks from your supplementary content landmarks. The tabs add an extra layer to the code here as well. Here’s how we’d do it in our example:
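One way to mark up the tabs, with placeholder IDs and content:

```html
<div role="tablist" aria-label="Product details">
  <button role="tab" id="tab-desc" aria-selected="true" aria-controls="panel-desc">Description</button>
  <button role="tab" id="tab-reviews" aria-selected="false" aria-controls="panel-reviews">Reviews</button>
</div>
<!-- Each panel points back at the tab that labels it -->
<div role="tabpanel" id="panel-desc" aria-labelledby="tab-desc">Full product description...</div>
<div role="tabpanel" id="panel-reviews" aria-labelledby="tab-reviews" hidden>Customer reviews...</div>
```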

Adding ARIA to Category Pages

While product pages can be considered an alternative form of content page in most respects, a site’s category pages, also called Product Listing Pages (PLPs), are a whole different beast. They are operating as a large navigation structure, allowing users to sort through hundreds or even thousands of products.

This makes them increasingly complex, becoming even more so as additional layers of content and filters are added (we’ll talk about faceted navigation and filters in the next section). Let’s look at the two main areas of our PLP outside of the filters: the product blocks and the pagination.

Handling Pagination

Pagination is the name given to the small links at the bottom of our product listings here. Typically, they’re represented by numbers or arrows, but they can come in various other forms. On the HTML side of things, pagination links look just like regular links. We’ll say that ours is controlling the product listings without redirecting to another page.

To make it known that it's controlling a content area in this way, we have to declare it as a controller, define what it is controlling, and then mark that content area as live. Here’s what that looks like in our case:
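Sketched with placeholder IDs, that gives us:

```html
<!-- The pagination declares the listing it controls... -->
<nav aria-label="Pagination" aria-controls="product-list">
  <a href="#" aria-label="Page 1">1</a>
  <a href="#" aria-label="Page 2">2</a>
</nav>
<!-- ...and the listing is marked live so updates are announced -->
<ul id="product-list" aria-live="polite">
  <!-- product blocks -->
</ul>
```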

When we create our live area here, we utilize the "polite" setting that ARIA makes available. If your changes are pertinent and need to be addressed by the user quickly, or you need to prioritize among several live areas, you can use the value "assertive" as well.

Marking Up Repetitive Elements

A unique challenge that comes up with product listing pages is the intensive navigation complexity within the product listings themselves. From a visual perspective, it can be easy enough to group the information, using visual cues to determine what information applies to which product.

Doing so with ARIA has a few more layers than the previous applications we’ve covered. Marking a “buy now” button as a standard button can create confusion when there are 20 of these buttons on a page. To solve this, we’ll need to create clear connections between each product and its related elements.
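For instance, each block can reference its own title; the IDs and product name here are placeholders:

```html
<li>
  <h2 id="product-42-title">Example Product</h2>
  <!-- The computed label reads "Buy now Example Product" rather than a bare "Buy now" -->
  <button id="product-42-buy" aria-labelledby="product-42-buy product-42-title">Buy now</button>
</li>
```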

While this does help a bit with clarifying relations for the user, it's still not the best implementation. A better way would be to dynamically generate an aria-label by concatenating the product-title element with an additional phrase such as "add to cart".
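A sketch of that dynamic approach; the class names are placeholders for whatever your own markup uses:

```html
<script>
  // Build labels like "Add Example Product to cart" from each block's title
  document.querySelectorAll('.product-block').forEach(function (block) {
    var title = block.querySelector('.product-title').textContent;
    block.querySelector('.buy-button')
      .setAttribute('aria-label', 'Add ' + title + ' to cart');
  });
</script>
```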

Using ARIA With Faceted Navigation

Faceted navigation refers to the filters and options that are commonly shown on eCommerce sites, letting you narrow down your search results. These come in many flavors—from sizes to color and beyond. For our example, we’re going to make two assumptions:

Our faceted navigation updates the products live on the page. This isn’t always the case, as sometimes eCommerce sites might generate a new page when a filter is applied, but we’ll be working as if the site updates content live.

Our faceted navigation allows for the selection of multiple filters. Not every eCommerce site does this, and there are definitely cases where it shouldn’t be allowed. However, this creates an extra layer of complexity outside of the scope of this article.

Setting Up Your Controls

The HTML behind our filters is similar to that of pagination, appearing in the code as basic links. For our uses, though, the intent of the filters is to alter information that is currently on the page. Because of this, we’ll want to mark the entire container around the filters, making it clear that this is a controller for another area on the page:
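For example (the IDs and filter labels are placeholders):

```html
<!-- The whole filter group is declared as a controller of the listing area -->
<div role="group" aria-label="Filter products" aria-controls="product-list">
  <a href="#">Size: Small</a>
  <a href="#">Color: Blue</a>
</div>
```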

Defining Live Areas

Like pagination, these updates are happening live on the page. Because of this, we’ll want to mark the main content on our page as being “live”. Note that we did this previously in the pagination section, but we’ll be repeating the step here for consistency.
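As before, assuming a placeholder ID:

```html
<!-- Filter changes re-render into this live region and are announced politely -->
<div id="product-list" aria-live="polite">
  <!-- product blocks render here -->
</div>
```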

Making eCommerce More Accessible

With these new additions to your ARIA toolset, you should now be able to appropriately mark up almost any eCommerce site. To ensure the best user experience with an eCommerce site, remember to keep your navigation as simple as possible, and express intent clearly.

Have further questions on this topic? Did I miss something important? Tell me in the comments below!

]]>2018-09-29T00:09:18+00:00//www.4elements.com/blog/read/new_ebooks_available_for_subscribers
https://www.4elements.com/blog/read/new_ebooks_available_for_subscribers#When:10:56:52Z

Do you want to learn more about big data analytics? How about creating microservices with Kotlin, or learning Node.js development? Our latest batch of eBooks will teach you all you need to know about these topics and more.

What You’ll Learn

This month, we’ve made eight new eBooks available for Envato Tuts+ subscribers to download. Here’s a summary of what you can learn from them.

Go is becoming more and more popular as a language for security experts. Security With Go is the first Golang security book, and it is useful for both blue team and red team applications. With this book, you will learn how to write secure software, monitor your systems, secure your data, attack systems, and extract information.

Rust is an open source, safe, concurrent, practical language created by Mozilla. It runs blazingly fast, prevents segfaults, and guarantees safety. This book gets you started with essential software development by guiding you through the different aspects of Rust programming. With this approach, you can bridge the gap between learning and implementing immediately.

With the help of this guide, you will be able to bridge the gap between the theoretical world of technology and the practical ground reality of building corporate Big Data and data science platforms. You will get hands-on exposure to Hadoop and Spark, build machine learning dashboards using R and R Shiny, create web-based apps using NoSQL databases such as MongoDB, and even learn how to write R code for neural networks.

By the end of the book, you will have a very clear and concrete understanding of what Big Data analytics means, how it drives revenues for organizations, and how you can develop your own Big Data analytics solution using the different tools and methods articulated in this book.

You want to build iOS applications but where do you start? Forget sifting through tutorials and blog posts—this book is a direct route into iOS development, taking you through the basics and showing you how to put the principles into practice. So take advantage of this developer-friendly guide and start building applications that may just take the App Store by storm!

Learning Node.js Development is a practical, project-based book that provides you with everything you need to get started as a Node.js developer. If you are looking to create real-world Node applications, or you want to switch careers or launch a side project to generate some extra income, then you're in the right place. This book has been written around a single goal—turning you into a professional Node developer capable of developing, testing, and deploying real-world production applications.

Microservices help you design scalable, easy-to-maintain web applications; Kotlin allows you to take advantage of modern idioms to simplify your development and create high-quality services.

This book guides you in designing and implementing services and producing production-ready, testable, lean code that's shorter and simpler than a traditional Java implementation. Reap the benefits of using the reactive paradigm and take advantage of non-blocking techniques to take your services to the next level in terms of industry standards.

Do you want to build applications that are high-performing and fast? Are you looking for complete solutions to implement complex data structures and algorithms in a practical way? If either of these questions rings a bell, then this book is for you!

You'll start by building stacks and understanding performance and memory implications. You will learn how to pick the right type of queue for the application. You will then use sets, maps, trees, and graphs to simplify complex applications. You will learn to implement different types of sorting algorithm before gradually calculating and analyzing space and time complexity. Finally, you'll increase the performance of your application using micro optimizations and memory management.

By the end of the book you will have gained the skills and expertise necessary to create and employ various data structures in a way that is demanded by your project or use case.

If you are a Python developer and want to efficiently create RESTful web services with Django for your apps, then this is the right book for you. The book starts off by showing you how to install and configure the environment, required software, and tools to create RESTful web services with Django and the Django REST framework. You then move on to working with advanced serialization and migrations to interact with SQLite and non-SQL data sources, creating API views to process diverse HTTP requests on objects, adding security and permissions, and much more.

By the end of the book, you will be able to build RESTful web services with Django.

Start Reading With a Combined Subscription

You can read our new eBooks straight away with a subscription to Envato Elements. For a single low monthly fee, you get access not only to these eBooks, but also to our growing library of over 1,000 video courses on Envato Tuts+.

Plus you now get unlimited downloads from the huge Envato Elements library of 680,000+ creative assets. Create with unique fonts, photos, graphics and templates, and deliver better projects faster.

]]>2018-09-19T10:56:52+00:00//www.4elements.com/blog/read/hands-on-with-aria-homepage-elements-and-standard-navigation
https://www.4elements.com/blog/read/hands-on-with-aria-homepage-elements-and-standard-navigation#When:12:39:51Z

Looking to make your website more accessible? Want to be the first in line as new online interfaces come to market? Look no further than ARIA.

This set of standards maintained by the W3C (World Wide Web Consortium) gives you the best of both worlds by adding a number of attributes that allow HTML to be extended in meaningful ways. Here, we’ll walk through what ARIA is, see how it can benefit an informational website, and go through a use case step by step with code examples. Let’s get started!

ARIA Basics

ARIA (or sometimes WAI-ARIA) is the acronym for a set of accessibility standards, called the Web Accessibility Initiative–Accessible Rich Internet Applications. You can check out more about the foundations of ARIA in my previous article, but let’s go over some of its pillars now.

Defining Non-Traditional Relationships

A majority of websites are built using HTML, which primarily relates elements to each other in a hierarchical fashion through parent-child relationships. This structure is great for defining content, but falls short when it comes to defining user interfaces. For example, in many sites and web applications, an area of content is controlled by buttons within a sibling element—the siblings have the same parent element, but in HTML they don't have a direct relationship with each other. Because of this, it becomes difficult to define which User Interface (UI) elements control which pieces of content when using assistive technologies.

This carries through to newer interfaces as well. For example, if you are trying to navigate a website through a smart device, it becomes difficult when element changes are not visible.

ARIA allows you to tie HTML elements together using additional attributes to represent these types of controls.

Non-Rigid Element Classification

Another shortcoming of HTML is its inability to separate structure from intent.

For example, you may want to make an image element into a clickable button. However, HTML still largely defines that image as only an image, and everything beyond that happens elsewhere.

With ARIA, intent can be separated out from an element, allowing for images to be marked as buttons or a link to be defined as a tooltip. This gives more control to the developer concerning the UI, creating more clearly set relationships.

Creating Landmark Areas

Beyond marking elements within the UI, ARIA also gives access to the role attribute—used to define areas of a page. For example, you can mark your main menu as navigation and your article’s content area as main content. This makes it easier for users to move throughout the important areas of your site, and can prevent confusion for those with uncommon or complex site layouts.

Use Case: Small Business Homepage

To get some experience adding ARIA to a site, we’re going to take a wireframe of a site that might be used by a small business and implement our attributes step by step.

The page wireframe we'll be marking up

For the sake of clarity, the code we’ll be working with is stripped down, with CSS classes and any functionality from a CMS removed.

The first thing we’ll want to do is break up our wireframe into parts to make adding in ARIA simpler overall. In the picture below, you’ll see that I’ve chosen to break the site down into five main parts:

navigation

content

sidebar

contact forms

specialized UI elements

In our case, it looks like this:

The sections we'll be working with

When breaking your site down into areas like this, we’re looking for two things. The first is major elements that can be defined by an ARIA landmark: banner, navigation, main, complementary, contentinfo, search, and form. These represent the necessary parts of our site; anything unnecessary for using it (e.g. advertisements) won’t be marked as a landmark.

The second thing to look for is specific elements that need to be clarified with ARIA. In most cases, this is pretty simple (such as marking an image as an image), but for some UI elements, it can get a bit tricky.

Once we know what areas need to have ARIA implemented, we can start to move through them systematically. Let’s get started with the site’s navigation.

Navigation

In our example, you’ll notice that we have a few types of navigation. The first is a menu as seen on most sites, listing some pages for the site. Directly below is a smaller menu that holds options for users.

We want to mark these with the role="navigation" attribute so that they can easily be picked out as the site’s menus. This leads to the question: should they be grouped together into a single navigation landmark, or marked as two separate landmarks?

To answer this question in your own projects, you can typically ask yourself two questions:

Is the intent for these menus different? In our example, the top menu navigates the site’s pillar pages, while the smaller menu focuses on things that a logged-in user might need. These intents are different, so it makes sense to separate them.

Are the menus within the same parent element? I know this seems counterintuitive since ARIA is designed to help us overcome these types of relationship restrictions, but in this case it is less about what is possible and more about what is right for the user. Having a single menu defined, but with half of it in one location and the other half elsewhere, makes navigation more difficult.

For our case, we are going to treat our two menus as separate landmarks, so we’ll make some changes to the code. To start with, we have just our basic HTML:
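The starting markup might be sketched like this (class names and links are illustrative):

```html
<!-- Main menu: links to the site's pillar pages -->
<div class="main-menu">
  <ul>
    <li><a href="/about">About</a></li>
    <li><a href="/services">Services</a></li>
    <li><a href="/blog">Blog</a></li>
  </ul>
</div>

<!-- Smaller menu with options for logged-in users -->
<div class="user-menu">
  <ul>
    <li><a href="/account">My Account</a></li>
    <li><a href="/logout">Log Out</a></li>
  </ul>
</div>
```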

The next step in defining these landmarks is to give the user a hint as to what the intent of each menu is. If we leave them both as navigation without any further information, it just makes things more difficult to interpret. So let’s add meaningful labels to them using the aria-label attribute:
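A sketch of the two menus with roles and labels applied (label text is illustrative):

```html
<!-- role="navigation" marks each menu as a landmark;
     aria-label tells the user what each one is for -->
<div class="main-menu" role="navigation" aria-label="Main menu">
  <ul>
    <li><a href="/about">About</a></li>
    <li><a href="/services">Services</a></li>
  </ul>
</div>

<div class="user-menu" role="navigation" aria-label="User menu">
  <ul>
    <li><a href="/account">My Account</a></li>
    <li><a href="/logout">Log Out</a></li>
  </ul>
</div>
```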

Within that content, we’ll go on to find any element that has an intent that doesn’t match its HTML definition.

First, we’ll take care of the image acting as a button by adding the "button" role:

<img src="#" onclick="#" role="button" />

This link that activates a modal is a bit trickier, because it depends on what is in the modal itself. For us, we’re going to say it’s a tooltip:

<a href="#" onmouseover="#" role="tooltip">scelerisque</a>

Within our main content, we also have a search form. This has an extra layer of complexity to it, in that it’s a search form using HTML elements, and it also qualifies as a search box landmark. We would mark it up like this:
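A sketch of that markup (field names illustrative):

```html
<!-- role="search" marks this form as the page's search landmark -->
<form role="search">
  <label for="site-search">Search</label>
  <input type="search" id="site-search" name="q">
  <button type="submit">Search</button>
</form>
```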

Beyond that, you can define every element with its proper ARIA tag. For most sites, this can be too much of a time burden on the development process, though in most CMSs it can be automated. In cases where it can’t be, if an element’s HTML definition matches its use intent, then it can be considered low priority when making ARIA implementations. Here’s what the main content area looks like after making all these changes:
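Pulling those pieces together, the main content area might be sketched like this (structure and content illustrative):

```html
<div role="main">
  <h1>Article Title</h1>
  <p>Article content...</p>

  <!-- Image acting as a button -->
  <img src="#" onclick="#" role="button" alt="Next" />

  <!-- Link acting as a tooltip trigger -->
  <a href="#" onmouseover="#" role="tooltip">scelerisque</a>

  <!-- Search form, which doubles as a search landmark -->
  <form role="search">
    <input type="search" name="q" aria-label="Search this site">
  </form>
</div>
```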

To define the sidebar content, we’ll want to give it the "complementary" role, letting users know that it holds additional content related to the main content. That can look like this:
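For example (class name illustrative):

```html
<!-- "complementary" tells assistive technology this content
     supports, but is separate from, the main content -->
<div class="sidebar" role="complementary">
  <h2>About the Author</h2>
  <p>...</p>
</div>
```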

The related posts below could be considered a form of navigation, allowing users to further explore the posts of the site. We’ll want to mark it with a "navigation" role, and give it an appropriate label, like so:
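For example (label and links illustrative):

```html
<div role="navigation" aria-label="Related posts">
  <ul>
    <li><a href="#">Another post</a></li>
    <li><a href="#">One more post</a></li>
  </ul>
</div>
```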

Each site’s sidebar is different and may require a different combination of roles and landmarks. If your sidebar has an advertisement, it’s best not to mark that element. If there’s a search form within your sidebar, mark it with the appropriate role as well. Any menus that appear in a sidebar should follow the same pattern we discussed in the navigation section.

Handling Contact Forms

Finally, at the bottom of our page is a call-to-action form, asking for the user’s name and email, with a standard submit button below. When it comes to forms, there are three parts to keep in mind:

Give the form the landmark role of "form": since the form is a major part of the site, we need to make it easy for users to get to it. We do so by giving it a landmark role

Assign matching roles to elements. Forms are a common area for intent and HTML definitions to be mismatched. Add in ARIA roles where necessary, especially when it comes to checkboxes, sliders, tooltips, and other elements that can be implemented in multiple ways.

Match the labels with the appropriate elements. HTML handles this in a basic way, letting you use the <label> element to associate a label with an input. Forms can easily have a more complex structure that prevents that from working; fortunately we can fix that with the aria-labelledby attribute.
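Pulling those three points together, a sketch of the call-to-action form (ids and label text illustrative):

```html
<!-- The landmark role makes the form easy to reach;
     aria-labelledby ties labels to inputs when <label for> won't fit the structure -->
<form role="form" aria-labelledby="cta-heading">
  <h2 id="cta-heading">Join our newsletter</h2>

  <span id="name-label">Name</span>
  <input type="text" aria-labelledby="name-label">

  <span id="email-label">Email</span>
  <input type="email" aria-labelledby="email-label">

  <button type="submit">Sign Up</button>
</form>
```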

Making the Web More Accessible

Our site wireframe now has ARIA! While there is still a lot of ARIA left to explore, you now have enough knowledge to make a large portion of the sites you work on more accessible. Beyond that, your site is also better prepared to handle any number of new internet-traversing technologies that might arise.

Is there another aspect of ARIA that you’d like us to explore? Have questions about this article? Feel free to add them in the comments below!

2018-09-18T12:39:51+00:00
https://www.4elements.com/blog/read/20-best-wordpress-calendar-plugins-and-widgets#When:10:18:06Z

Whether you need an event calendar plugin, a booking system with payments, or a Google calendar widget, this list of plugins will have something to improve your app or site. Take a look at these 20 plugins available for download on Envato Market to see what WordPress is capable of.

WordPress calendar plugins encompass plugins for events, bookings, and appointments. Some transform WordPress into its own private workplace app, while others let you set up a fully functioning scheduling and payment gateway.

Take a look at these 20 WordPress calendar plugins available on Envato Market and you'll see what I mean.

There are certain features that you expect with a WordPress calendar plugin; however, there are a few features that really set a plugin apart. Take a look at WordPress Pro Event Calendar and you'll see a number of advanced features that really make it stand out from the pack.

Well designed and fully responsive, this plugin has some really great features:

WPML support

flexible event settings

Google Maps integration

subscribe to a calendar

custom fields and date range support

and more

But what really sets it apart is the ability to import events from ICS feeds and Facebook.

A timetable calendar is a great option for situations in which there are several events going on the same day.

Timetable is a plugin with a nice TV guide styled UI and presentation.

Built on jQuery and CSS3, this plugin lets you add events and modify your timetable from within its modern admin screens.

One of the most powerful features of Timetable is the ability to import and export timetables via CSV. Now you can edit and make changes in Excel or another spreadsheet app and import your changes and added events.

You can also select which program to print, for users who prefer their schedules on paper.

If you're already using Gravity Forms and need to integrate some appointment booking, this solution is certainly worth a look.

Features include:

supports paid and non-paid booking

accepts any payment gateway

many options for service intervals and slots

and more

Combined with Gravity Forms, gAppointments is a powerful plugin for booking appointments.

Conclusion

WordPress has come a long way since that first default calendar widget. You can see by this list that WordPress has evolved into a web-based tool that can be used day in and day out for all kinds of organizations.

You can dig through Envato Market for more WordPress calendar plugins—and of course, if you can't find exactly what you're looking for, you could always code your own!

2018-09-18T10:18:06+00:00
https://www.4elements.com/blog/read/20-best-jquery-image-sliders#When:10:54:54Z

Let's be honest—sliders are fun. A little bit of movement can really bring a page to life.

Sliders—also known as "carousels" or "image sliders"—are interactive elements for showing images or other media in a web page.

Take a look at these 20 useful jQuery sliders from Envato Market and you'll see there's more to sliders than you might have imagined.

You can do this with multiple slides, or even just one slide, adding some compelling parallax animation to your website.

This comes with four different types of sliders, all offering the same animated, parallax effect.

Like many other jQuery sliders, it also includes:

fully customizable

touchscreen support

fully responsive and unlimited layers

autoplay, loop, height & width, and timer parameters

and more

Animated layers are not limited to text and images either. You can also include YouTube, Vimeo, and HTML5 video.

The Parallax Slider is another fine example of how Flash-like effects can be executed better than Flash—and be supported across all devices.

Conclusion

It's interesting to see how the jQuery slider has evolved from something that just transitioned from one image to the next, into a broad range of creative tools. We see sliders that are 3D, parallax, full page, and are fully responsive and viewable on the desktop or smartphone.

If you didn't find a jQuery slider you liked in this list, you could always take an Envato Tuts+ jQuery Code Tutorial and develop something completely new and unique. Otherwise, dig through the many other great jQuery sliders on Envato Market—there are plenty to choose from.

Using standard HTML alone, modern web apps can lock out users with accessibility needs.

HTML is the predominant markup language online, used by nearly 83% of existing websites. While there have been some changes in the 25 years since its creation, even newer iterations, such as HTML5 and AMP, leave a lot to be desired—especially when it comes to accessibility. That's where ARIA comes in. In this tutorial, I'm going to talk about what ARIA is, why it’s useful, and a couple of ways to add it to your site.

What Is ARIA?

ARIA, also known as WAI-ARIA, stands for The Web Accessibility Initiative's Accessible Rich Internet Applications. The full specifications document can be found here. Note that the full spec document is fairly dense, so it might be a good idea to start by reading this post and checking out some of the other resources I link below.

The main purpose of ARIA is to allow for advanced semantic structure within HTML as a counterpart to HTML's syntax-heavy nature. In other words, HTML tells the browser where things go, and ARIA tells it how they interact.

Who Is Responsible for ARIA?

ARIA is a project hosted by the W3C (World Wide Web Consortium). The project adheres to the same standards for updating and editing as their other initiatives. They also provide a GitHub repository of several tests you can run to make sure your page is running properly.

What’s Wrong With My Current Site Markup?

Most sites with a structured, well-thought-out design do well enough with adaptive technologies (i.e. screen readers). However, a user being able to figure out how to use your site and finding it easy to use are two different things. Low-vision users make up almost 2% of the population, and for them the difference can mean saving a significant amount of time and detective work when performing basic online tasks. It can be the difference between offering visitors a spectacular experience and handing them a maze to navigate.

Beyond traditional means of accessibility, ARIA is finding its way into technologies that provide new takes on standard interaction. An increasing number of voice systems, aggregated browsing (like car-embedded computers, for example), and other innovations are putting ARIA to use, taking advantage of its increased semantic capabilities.

Okay, But What Does It Do?

Overall, ARIA connects elements together in a semantically meaningful way. It provides the user with additional meaning regarding interaction. Here are some real-world examples of how it might be used:

Associating non-nested elements: With plain HTML, the user's browser can only see relations based upon parent/child relationships. In some situations, however, we may want a series of buttons parallel to the content in the HTML hierarchy. With ARIA, we can define what type of controller each button is, and what element it controls elsewhere in the document.

Declare interactive elements: While HTML provides a handful of elements for interactivity, ARIA defines dozens more, allowing for more granular descriptions of what each element of your page can do. In addition, these can be assigned to HTML tags that wouldn't be commonly used for such a purpose but might make sense in your specific case. For example, you might use the <li> tag for a series of checkboxes that you prefer not to be composed of form elements.

Live area update notifications: Another feature that ARIA provides is the definition of a "live" content area. When a content area is defined in this way, ARIA will notify the user whenever content within that element changes. This can be useful when making sure low-vision users know that something has changed on your page.

Adding ARIA to Your Web Pages

We've talked about what ARIA can do, so now let's look at some of the most common examples. We'll start each section with a brief statement of the goal we are looking to accomplish, followed by a code sample of how to accomplish it.

Creating Alternative Labeling With ARIA

When it comes to alternative labeling, most developers are familiar with the alt attribute commonly used on <img> tags. This tag serves an important purpose: describing the image it is attached to for increased accessibility (or as a common SEO tactic, depending on your perspective).

ARIA provides a similar attribute called aria-label that can be attached to any HTML element, improving accessibility for not only images, but site sections, form controls, and more. Here's an example of what that looks like:
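A sketch (element content illustrative):

```html
<!-- alt works only on images... -->
<img src="logo.png" alt="Company logo">

<!-- ...while aria-label can annotate any element, such as a button
     whose visible "×" means little to a screen reader -->
<button aria-label="Close dialog">&times;</button>

<!-- ...or a whole site section -->
<div role="navigation" aria-label="Footer links">
  <a href="/privacy">Privacy</a>
  <a href="/terms">Terms</a>
</div>
```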

Defining ARIA-Specific UI Elements

HTML already provides a number of elements for the creation of web pages, but their main focus is typically to define an area generically and present the user with the site's structure. ARIA provides a few dozen additional elements that focus more heavily on how an element is used, such as a timer, tooltip, or progress bar.

An example use here is a tooltip that you might find on a form. There are a number of ways to create one, ranging from a link that triggers some JavaScript to an element that creates a modal when hovered over. The missing piece here is that despite how it might work for sighted users, low-vision users might not ever even know that the tooltip exists.
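One way to expose such a tooltip to assistive technology might look like this (ids illustrative):

```html
<label for="pwd">Password</label>
<input type="password" id="pwd" aria-describedby="pwd-tip">

<!-- role="tooltip" declares what this element is;
     aria-describedby on the input ties the two together -->
<div role="tooltip" id="pwd-tip">Must be at least 12 characters long.</div>
```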

Available ARIA Definitions

To expand on these UI elements, here's a brief list of some of the most interesting "roles" that can be defined. The full listing is available in the referenced specification document.

search

banner

presentation

toolbar

status

menuitem

log

dialog

link

Establishing Relationships Outside of the Parent/Child Structure

Now let's expand on a point that we talked about earlier: the forced structure of HTML. While the parent/child relationship is good for deciding how things should be ordered, it falls short when more meaningful connections are needed. An example of this is sibling elements. Some libraries have added the ability for siblings or other forms of element relationships to be traversed, but this typically happens in JavaScript or another language outside of the markup.

ARIA gives us the ability to define these relations right in the markup, making it easier to group menu items, create non-standard navigation, and attach controls to element areas in ways that would otherwise be difficult.

Let's take a look at how we might use this to connect some controls to a content area:

This snippet says that the nextbutton.jpg image is a button, which is a control for the tutorial div below.

Creating "Live" Elements in ARIA

The last feature of ARIA that we'll cover here is the aria-live attribute. While most of the other features of ARIA here deal with semantic connections, this one deals directly with the idea of giving users notifications of content or element changes.

For many with low vision, it might not be immediately clear that their interaction with your site caused a change elsewhere on the page. This can be especially true for subtle changes, such as small blurbs of text that may change but remain relatively the same length. By using this attribute, every time the content is changed within the defined element, your user will be notified.
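For example (content illustrative):

```html
<!-- aria-live="polite" waits for a pause before announcing changes;
     "assertive" interrupts the user immediately -->
<div aria-live="polite">
  <p>3 items in your cart.</p>
</div>
```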

Making the Web a Better Place for All Users

With a little over 2% of the US population having some form of low vision, improving the accessibility of your site can increase its reach dramatically. For sites reaching multiple countries, that number becomes even larger. On top of accessibility, ARIA also provides a way for non-browser interfaces to use your site, and a number of voice-based devices already provide support.

Implementing ARIA helps your users and can help your traffic, so get to it!

Did I miss any details, or do you have additional questions? Leave a comment below!

If you want to dive into the full ARIA documentation or try the official testing tool, check out the links below:

2018-09-04T19:37:20+00:00
https://www.4elements.com/blog/read/how-secure-are-your-javascript-open-source-dependencies#When:14:00:00Z

Modern-day JavaScript developers love npm. GitHub and the npm registry are a developer’s first choice for finding a particular package. Open-source modules add to productivity and efficiency by providing developers with a host of functionality to reuse in their projects. It is fair to say that without these open-source packages, most of today's frameworks would not exist in their current form.

A full-fledged enterprise-level application, for instance, might rely on hundreds if not thousands of packages. The usual dependencies include direct dependencies, development dependencies, bundled dependencies, production dependencies, and optional dependencies. That’s great because everyone’s getting the best out of the open-source ecosystem.

However, one factor that often gets overlooked is the amount of risk involved. Although these third-party modules are particularly useful in their domain, they also introduce some security risks into your application.

Are Open-Source Libraries Vulnerable?

OSS dependencies are indeed vulnerable to exploits and compromises. Let's have a look at a few examples:

A vulnerability was recently discovered in a package called eslint-scope, which is a dependency of several popular JavaScript packages such as babel-eslint and webpack. The account of the package maintainer was compromised, and the attackers added some malicious code to it. Fortunately, someone discovered the exploit quickly enough that the damage was reportedly limited to a few users.

Moment.js, one of the most-used libraries for parsing and displaying dates in JavaScript, was recently found to have a vulnerability with a severity score of 7.5. The flaw made it vulnerable to ReDoS attacks. Patches were released, and the issue was fixed quickly.

But that's not all. A lot of new exploits get unearthed every week. Some of them get disclosed to the public, but others make headlines only after a serious breach.

So how do we mitigate these risks? In this article, I'll explain some of the industry-standard best practices that you can use to secure your open-source dependencies.

1. Keep Track of Your Application’s Dependencies

Logically speaking, as the number of dependencies increases, the risk of ending up with a vulnerable package can also increase. This holds equally true for direct and indirect dependencies. Although there’s no reason to stop using open-source packages, it’s always a good idea to keep track of them.

These dependencies are easily discoverable: it can be as simple as running npm ls in the root directory of your application. You can use the --prod argument, which displays all production dependencies, and the --long argument for a summary of each package.

Furthermore, you can automate the dependency-management process with a service that offers real-time monitoring and automatic update testing for your dependencies. Familiar tools include Greenkeeper and Libraries.io. These tools collate a list of the dependencies you are currently using and track relevant information about them.

2. Get Rid of Packages That You Do Not Need

With the passage of time and changes in your code, it is likely that you'll stop using some packages altogether and instead add in new ones. However, developers tend not to remove old packages as they go along.

Over time, your project might accumulate a lot of unused dependencies. Although this is not a direct security risk, these dependencies add to your project’s attack surface and lead to unnecessary clutter in the code. An attacker may be able to find a loophole by loading an old but still-installed package with known vulnerabilities, increasing the potential damage they can cause.

How do you check for such unused dependencies? You can do this with the help of the depcheck tool. Depcheck scans your entire code for requires and import commands. It then correlates these commands with either installed packages or those mentioned in your package.json and provides you with a report. The command can also be modified using different command flags, thereby making it simpler to automate the checking of unused dependencies.

Install depcheck with:

npm install -g depcheck
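Typical usage might then look like this (flag values illustrative):

```shell
# Run from your project root; reports unused and missing dependencies
depcheck

# Machine-readable output, handy for automating the check in CI
depcheck --json

# Skip packages used outside of require/import statements
depcheck --ignores="eslint,babel-*"
```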

3. Find and Fix Crucial Security Vulnerabilities

Almost all of the points discussed above are primarily concerned with the potential problems that you might encounter. But what about the dependencies that you’re using right now?

According to a recent study, almost 15% of current packages include a known vulnerability, either in their components or in their dependencies. The good news, however, is that there are many tools you can use to analyze your code and find open-source security risks within your project.

The most convenient tool is npm’s npm audit, a command that shipped with npm version 6. npm audit was initially developed by the Node Security Platform, which npm later acquired. If you’re curious to know what npm audit is all about, here’s a quote from the official blog:

A security audit is an assessment of package dependencies for security vulnerabilities. Security audits help you protect your package's users by enabling you to find and fix known vulnerabilities in dependencies. The npm audit command submits a description of the dependencies configured in your package to your default registry and asks for a report of known vulnerabilities.

The report generated usually comprises the following details: the affected package name, the vulnerability severity and description, the path, and other information, plus, if available, commands to apply patches to resolve the vulnerabilities. You can even get the audit report in JSON by running npm audit --json.

Apart from that, npm also offers assistance on how to act based on the report. You can use npm audit fix to fix issues that have already been found. These fixes are commonly accomplished using guided upgrades or via open-source patches.

4. Replace Expired Libraries With In-House Alternatives

The concept of open-source security is heavily reliant on the number of eyes that are watching over that particular library. Packages that are actively used are more closely watched. Therefore, there is a higher chance that the developer might have addressed all the known security issues in that particular package.

Let’s take an example. On GitHub, there are many JSON web token implementations that you can use with your Node.js library. However, the ones that are not in active development could have critical vulnerabilities. One such vulnerability, which was reported by Auth0, lets anyone create their own "signed" tokens with whatever payload they want.

If a reasonably popular or well-used package had this flaw, the odds of a developer finding and patching the fault would be higher. But what about an inactive/abandoned project? We’ll talk about that in the next point.

5. Always Choose a Library That’s in Active Development

Perhaps the quickest and most efficient way to determine the activity of a specific package is to check its download rate on npm. You can find this in the Stats section of npm’s package page. It is also possible to extract these figures automatically using the npm stats API or by browsing historic stats on npm-stat.com. For packages with GitHub repositories, you should check out the commit history, the issue tracker, and any relevant pull requests for the library.

6. Update the Dependencies Frequently

Many bugs, including a large number of security bugs, are continually unearthed, and in most cases they are immediately patched. It is not uncommon to see recently reported vulnerabilities fixed only on the most recent branch or version of a given project.

For example, let's take the Regular Expression Denial of Service (ReDoS) vulnerability reported on the HMAC package ‘hawk’ in early 2016. This bug in hawk was quickly resolved, but only in the latest major version, 4.x. Older versions like 3.x were patched a lot later even though they were equally at risk.

Therefore, as a general rule, your dependencies are less likely to have any security bugs if they use the latest available version.

The easiest way to confirm that you’re using the latest version is with the npm outdated command. This command supports the --prod flag to ignore any dev dependencies and --json to make automation simpler.

Regularly inspect the packages you use to verify their modification date. You can do this in two ways: via the npm UI, or by running npm view <package> time.modified.

Conclusion

The key to securing your application is to have a security-first culture from the start. In this post, we’ve covered some of the standard practices for improving the security of your JavaScript components.

Use open-source dependencies that are in active development.

Update and monitor your components.

Review your code and write tests.

Remove unwanted dependencies or use alternatives.

Use security tools like npm audit to analyze your dependencies.

If you have any thoughts about JavaScript security, feel free to share them in the comments.

2018-08-22T14:00:00+00:00
https://www.4elements.com/blog/read/new-course-secure-your-wordpress-site-with-ssl#When:07:55:10Z

These days, it's more important than ever for your WordPress site to use an SSL (Secure Sockets Layer) certificate, which encrypts the data between the client and the server. Browsers now mark sites as "secure" or "not secure", and using SSL can boost your search engine rankings. Plus, of course, there are the obvious security benefits.

In our new Coffee Break Course, Secure Your WordPress Site With SSL, Envato Tuts+ instructor Bilal Shahid will show you how to get and install free SSL certificates using Certbot and Let's Encrypt—a free and open certificate authority aiming to support a more secure and privacy-respecting web.

Watch the introduction below to find out more.

You can take our new Coffee Break Course straight away with a subscription to Envato Elements. For a single low monthly fee, you get access not only to this course, but also to our growing library of over 1,000 video courses and industry-leading eBooks on Envato Tuts+.

Plus you now get unlimited downloads from the huge Envato Elements library of 650,000+ creative assets. Create with unique fonts, photos, graphics and templates, and deliver better projects faster.

2018-08-20T07:55:10+00:00
https://www.4elements.com/blog/read/testing-components-in-react-using-jest-and-enzyme#When:11:00:00Z

This is the second part of the series on Testing Components in React. If you have prior experience with Jest, you can skip ahead and use the GitHub code as a starting point.

In the previous article, we covered the basic principles and ideas behind test-driven development. We also set up the environment and the tools required for running tests in React. The toolset included Jest, ReactTestUtils, Enzyme, and react-test-renderer.

We then wrote a couple of tests for a demo application using ReactTestUtils and discovered its shortcomings compared to a more robust library like Enzyme.

In this post, we'll get a deeper understanding of testing components in React by writing more practical and realistic tests. You can head to GitHub and clone my repo before getting started.

Getting Started With the Enzyme API

Enzyme.js is an open-source library maintained by Airbnb, and it's a great resource for React developers. It uses the ReactTestUtils API underneath, but unlike ReactTestUtils, Enzyme offers a high-level API and easy-to-understand syntax. Install Enzyme if you haven't already.

The Enzyme API exports three types of rendering options:

shallow rendering

full DOM rendering

static rendering

Shallow rendering is used to render a particular component in isolation. The child components won't be rendered, and hence you won't be able to assert their behavior. If you're going to focus on unit tests, you'll love this. You can shallow render a component like this:

Full DOM rendering generates a virtual DOM of the component with the help of a library called jsdom. You can use it by replacing the shallow() method with mount() in the above example. The obvious benefit is that the child components are rendered as well, so if you want to test the behavior of a component together with its children, this is the option to use.

Static rendering is used to render React components to static HTML. It's implemented using a library called Cheerio, and you can read more about it in the docs.

First, I created a shallow-rendered DOM of the <ProductHeader/> component using shallow() and stored it in a variable. Then, I used the .find() method to find a node with tag 'h2'. It queries the DOM to see if there's a match. Since there is only one instance of the node, we can safely assume that node.length will be equal to 1.

The second test is very similar to the first one. The hasClass('title') method returns whether the current node has a className prop with value 'title'. We can verify the truthfulness using toBeTruthy().

Run the tests using yarn test, and both the tests should pass.

Well done! Now it's time to refactor the code. This is important from a tester's perspective because readable tests are easier to maintain. In the above tests, the first two lines are identical for both tests. You can refactor them using a beforeEach() function. As the name suggests, the beforeEach function gets called once before each spec in a describe block is executed.
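To see why beforeEach() removes the duplication, it helps to picture the execution order: the registered setup function runs again before every single spec. Here's a toy harness illustrating that order (a hand-rolled sketch for illustration only, not how Jest is actually implemented):

```javascript
// Toy illustration of beforeEach semantics: the registered setup
// function runs once before every spec in the block.
// (Hand-rolled sketch -- not Jest's real implementation.)
function makeSuite() {
  const setups = [];
  const log = [];
  return {
    beforeEach(fn) { setups.push(fn); },
    it(name, fn) {
      setups.forEach(setup => setup()); // fresh setup for each spec
      log.push(`spec: ${name}`);
      fn();
    },
    log,
  };
}

const suite = makeSuite();
suite.beforeEach(() => suite.log.push('setup'));
suite.it('first spec', () => {});
suite.it('second spec', () => {});

console.log(suite.log);
// → ['setup', 'spec: first spec', 'setup', 'spec: second spec']
```

Jest's real beforeEach works the same way at the suite level, which is why shared setup like shallow-rendering the component can live in one place.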

The text() method is particularly useful in this case to retrieve the inner text of an element. Try writing an expectation for the product.status() and see if all the tests are passing.

For the final test, we're going to mount the ProductDetails component without any props. Then we're going to look for an element with the class product-error and check whether it contains the text "Sorry, Product doesn't exist".

That's it. We've successfully tested the <ProductDetails /> component in isolation. Tests of this type are known as unit tests.

Testing Callbacks Using Stubs and Spies

We just learned how to test props. But to truly test a component in isolation, you also need to test the callback functions. In this section, we'll write tests for the ProductList component and create stubs for callback functions along the way. Here are the assumptions that we need to assert.

The number of products listed should be equivalent to the number of objects the component receives as props.

The ProductList receives the product data through props. In addition to that, it receives a callback from the parent. Although you could write tests for the parent's callback function, that's not a great idea if your aim is to stick to unit tests. Since the callback function belongs to the parent component, incorporating the parent's logic will make the tests complicated. Instead, we are going to create a stub function.

What's a Stub?

A stub is a dummy function that pretends to be some other function. This allows you to independently test a component without importing either parent or child components. In the example above, we created a stub function called handleProductClick by invoking jest.fn().
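A rough mental model of what jest.fn() returns is a function that records its own calls so you can assert on them later. A simplified hand-rolled equivalent (for illustration only; the real jest.fn() does much more) might look like this:

```javascript
// Simplified sketch of what jest.fn() gives you: a callable that
// records every call so tests can assert on how it was used.
// (Illustrative only -- not Jest's implementation.)
function makeStub() {
  const stub = (...args) => {
    stub.mock.calls.push(args);
  };
  stub.mock = { calls: [] };
  return stub;
}

// The component under test receives this instead of the real
// parent callback:
const handleProductClick = makeStub();

handleProductClick(10); // e.g. a simulated click passing a product id

console.log(handleProductClick.mock.calls.length); // → 1
console.log(handleProductClick.mock.calls[0]);     // → [10]
```

The mock.calls array is exactly what the assertions in the click test inspect.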

Now we just need to find all the <a> elements in the DOM and simulate a click on the first <a> node. After it's been clicked, we'll check whether handleProductClick() was invoked. If it was, it's fair to say our logic is working as expected.

Enzyme lets you easily simulate user actions such as clicks using the simulate() method. handleProductClick.mock.calls.length returns the number of times the mock function was called. We expect it to be equal to 1.

The other test is relatively easy. You can use the find() method to retrieve all <a> nodes in the DOM. The number of <a> nodes should be equal to the length of the productData array that we created earlier.

Testing the Component's State, Lifecycle Hook, and Method

Next up, we're going to test the ProductContainer component. It has a state, a lifecycle hook, and a class method. Here are the assertions that need to be verified:

componentDidMount is called exactly once.

The component's state is populated after the component mounts.

The handleProductClick() method should update the state when a product id is passed in as an argument.

To check whether componentDidMount was called, we're going to spy on it. Unlike a stub, a spy is used when you need to test an existing function. Once the spy is set, you can write assertions to confirm whether the function was called.
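In Jest you would typically set this up with jest.spyOn(object, methodName). Conceptually, a spy wraps the existing function, records the call, and still delegates to the original implementation. A minimal hand-rolled sketch of the idea (illustrative only, not Jest's implementation):

```javascript
// Hand-rolled sketch of a spy: unlike a stub, it wraps the real
// implementation, recording each call while still calling through.
// (Illustrative only -- use jest.spyOn in real tests.)
function spyOn(obj, methodName) {
  const original = obj[methodName];
  const calls = [];
  obj[methodName] = (...args) => {
    calls.push(args);                 // record the call
    return original.apply(obj, args); // delegate to the original
  };
  return { calls };
}

// Hypothetical object standing in for a component instance:
const lifecycle = {
  componentDidMount() { this.mounted = true; },
};

const spy = spyOn(lifecycle, 'componentDidMount');
lifecycle.componentDidMount(); // e.g. triggered by mounting

console.log(spy.calls.length);  // → 1
console.log(lifecycle.mounted); // → true
```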

The third one is a bit tricky. We need to verify that handleProductClick is working as expected. If you head over to the code, you'll see that the handleProductClick() method takes a product id as input, and then updates this.state.selectedProduct with the details of that product.

To test this, we need to invoke the component's method, and you can actually do that by calling component.instance().handleProductClick(). We'll pass in a sample product id. In the example below, we use the id of the first product. Then, we can test whether the state was updated to confirm that the assertion is true. Here's the whole code:

We've written 10 tests, and if everything goes well, this is what you should see:

Summary

Phew! We've covered almost everything that you need to know to get started with writing tests in React using Jest and Enzyme. Now might be a good time to head over to the Enzyme website to have a deeper look at their API.

What are your thoughts on writing tests in React? I'd love to hear them in the comments.

Testing code is a confusing practice for many developers. That's understandable, because writing tests requires extra effort, time, and the ability to foresee possible use cases. Startups and developers working on smaller projects usually choose to ignore tests altogether because of a lack of resources and manpower.

However, there are a couple of reasons why I believe that you should test your components:

It makes you feel more confident about your code.

Tests enhance your productivity.

React isn't any different. When your whole application starts to turn into a pile of components that are hard to maintain, testing offers stability and consistency. Writing tests from day one will help you write better code, spot bugs with ease, and maintain a better development workflow.

In this article, I will take you through everything that you need to know to write tests for your React components. I'll also cover some of the best practices and techniques while we're at it. Let's get started!

Testing Components in React

Testing is the process of verifying that our test assertions are true and that they stay true throughout the lifetime of the application. A test assertion is a boolean expression that returns true unless there is a bug in your code.

For instance, an assertion could be something as simple as this: "When the user navigates to /login, a modal with the id #login should be rendered." So, if it turns out that you messed up the login component somehow, the assertion would return false. Assertions are not just limited to what gets rendered—you can also make assertions about how the application responds to user interactions and other actions.

There are many automated testing strategies that front-end developers use to test their code. We will limit our discussion to just three software test paradigms that are popular with React: unit testing, functional testing, and integration testing.

Unit Testing

Unit testing is a veteran strategy that's still popular in testing circles. As the name suggests, you will be testing individual pieces of code to verify that they function independently as expected. Because of React's component architecture, unit tests are a natural fit. They're also faster because you don't have to rely on a browser.

Unit tests help you think of each component in isolation and treat them as functions. Your unit tests for a particular component should answer the following questions:

Does the component receive any props? If so, what does it do with them?

What components does it render?

Should it have a state? When or how should it update the state?

Is there a procedure that it should follow when it mounts or unmounts, or on user interaction?

Functional Testing

Functional tests are used to test the behavior of a part of your application. Functional tests are usually written from a user's perspective. A piece of functionality is usually not limited to a single component. It can be a full-fledged form or an entire page.

For instance, when you're building a signup form, it might involve components for the form elements, the alerts, and errors if any. The component that gets rendered after the form is submitted is also part of that functionality. This doesn't require a browser renderer because we'll be using an in-memory virtual DOM for our tests.

Integration Testing

Integration testing is a test strategy in which all the individual components are tested as a group. Integration testing attempts to replicate the user experience by running the tests in an actual browser. It is considerably slower than functional and unit testing because each test suite is executed in a live browser.

In React, unit tests and functional tests are more popular than integration tests because they are easier to write and maintain. That's what we will cover in this tutorial.

Know Your Tools

You need certain tools and dependencies to get started with unit and functional testing your React application. I've listed them below.

Jest Test Framework

Jest is a testing framework that requires zero configuration and is therefore easy to set up. It's more popular than test frameworks like Jasmine and Mocha because it's developed by Facebook. Jest is also faster than the rest because it uses a clever technique to parallelize test runs across workers. Apart from that, each test runs in a sandbox environment so as to avoid conflicts between two successive tests.

If you're using create-react-app, it comes shipped with Jest. If not, you might have to install Jest and a few other dependencies. You can read more about it on the official Jest documentation page.

react-test-renderer

Even if you're using create-react-app, you will need to install this package to render snapshots. Snapshot testing is a part of the Jest library. So, instead of rendering the UI of the entire application, you can use the test renderer to quickly generate a serializable HTML output from the virtual DOM. You can install it as follows:

yarn add react-test-renderer

ReactTestUtils and Enzyme

react-dom/test-utils consists of some of the test utilities provided by the React team. Alternatively, you can use the Enzyme package released by Airbnb. Enzyme is a whole lot better than ReactTestUtils because it makes it easy to assert on, manipulate, and traverse your React components' output. We will start our tests with ReactTestUtils and then transition to Enzyme later on.

There is more information about this in the Testing Components section of the create-react-app page.

Setting Up a Demo App and Organizing Tests

We will be writing tests for a simple demo application that displays a master/detail view of a list of products. You can find the demo application in our GitHub repo. The application consists of a container component known as ProductContainer and three presentational components: ProductList, ProductDetails, and ProductHeader.

This demo is a good candidate for unit testing and functional testing. You can test each component in isolation and/or test the product listing functionality as a whole.

Once you've downloaded the demo, create a directory named __tests__ inside /src/components/. You can then store all the test files related to this functionality inside the __tests__ directory. Testers usually name their test files with either a .spec.js or .test.js suffix—for example, ProductHeader.test.js or ProductHeader.spec.js.

Writing Basic Tests in React

Create a ProductHeader.test.js file if you haven't already. Here is what our tests are basically going to look like:

src/components/__tests__/ProductHeader.test.js

The test suite starts with a describe block, which is a global Jest function that accepts two parameters. The first parameter is the title of the test suite, and the second parameter is the actual implementation. Each it() in a test suite corresponds to a test or a spec. A test contains one or more expectations that check the state of the code.

expect(true).toBeTruthy();

In Jest, an expectation is an assertion that either returns true or false. When all the assertions in a spec are true, it is said to pass. Otherwise, the test is said to fail.

For instance, we've created two test specs. The first one should obviously pass, and the second one should fail.

Note: toBeTruthy() is a predefined matcher. In Jest, each matcher makes a comparison between the expected value and the actual value and returns a boolean. There are many more matchers available, and we will have a look at them in a moment.

Running the Test Suite

create-react-app has set up everything that you need to execute the test suite. All you need to do is run the following command:

yarn test

You should see something like this:

To make the failing test pass, you have to replace the toBeTruthy() matcher with toBeFalsy().

expect(false).toBeFalsy();

That's it!

Using Matchers in Jest

As mentioned earlier, Jest uses matchers to compare values. You can use them to check equality, compare two numbers or strings, and verify the truthiness of expressions. Here is a list of the popular matchers available in Jest.

toBe()

toBeNull()

toBeDefined()

toBeUndefined()

toBeTruthy()

toBeFalsy()

toBeGreaterThan()

toBeLessThan()

toMatch()

toContain()

This is just a taste. You can find all the available matchers in the reference docs.
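Under the hood, every matcher boils down to a comparison between the actual value (the one wrapped by expect) and an expected value, throwing when the comparison fails. A stripped-down sketch of the pattern (illustrative only, not Jest's actual implementation):

```javascript
// Stripped-down sketch of the expect/matcher pattern: each matcher
// compares the actual value against an expectation and throws if
// the comparison is false. (Not Jest's real implementation.)
function expect(actual) {
  return {
    toBe(expected) {
      if (actual !== expected) {
        throw new Error(`Expected ${actual} to be ${expected}`);
      }
    },
    toBeTruthy() {
      if (!actual) throw new Error(`Expected ${actual} to be truthy`);
    },
    toBeGreaterThan(expected) {
      if (!(actual > expected)) {
        throw new Error(`Expected ${actual} > ${expected}`);
      }
    },
  };
}

expect(2 + 2).toBe(4);         // passes silently
expect('react').toBeTruthy();  // passes silently
expect(10).toBeGreaterThan(5); // passes silently
```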

Testing a React Component

First, we'll be writing a couple of tests for the ProductHeader component. Open up the ProductHeader.js file if you haven't already.

src/components/ProductHeader.js

Why did I use a class component here instead of a functional component? The reason is that it's harder to test functional components with ReactTestUtils. If you're curious why, this Stack Overflow discussion has the answer.

We could write a test with the following assumptions:

The component should render an h2 tag.

The h2 tag should have a class named title.

To render a component and to retrieve relevant DOM nodes, we need ReactTestUtils. Remove the dummy specs and add the following code:

To check for the existence of an h2 node, we will first need to render our React elements into a DOM node in the document. You can do that with the help of some of the APIs exported by ReactTestUtils. For instance, to render our <ProductHeader/> component, you can do something like this:

Try saving it, and your test runner should show you that the test has passed. That's somewhat surprising because we don't have an expect() statement like in our previous example. Most of the methods exported by ReactTestUtils have expectations built into them. In this particular case, if the test utility fails to find the h2 tag, it will throw an error and the tests will automatically fail.

Now, try creating the code for the second test. You can use findRenderedDOMComponentWithClass() to check if there's any node with the class 'title'.

Conclusion

Although we just wrote two test specs, we've covered a lot of ground in the process. In the next article, we'll write some full-fledged tests for our product listing page. We'll also replace ReactTestUtils with Enzyme. Why? Enzyme offers a high-level interface that's very easy to use and developer-friendly. Stay tuned for the second part!

If at any point you feel stuck or need help, let us know in the comments.

2018-08-10T12:47:26+00:00
https://www.4elements.com/blog/read/15-best-php-event-calendar-and-booking-scripts#When:03:04:43Z

There are several reasons why PHP calendar, booking and events scripts might be a great addition to your website. If you’re a service provider, it makes sense to have an appointment booking system on your site that lets potential customers see your availability and select the appointment time and date that suits them best. This could cut down on needless calls to your business to make appointments and free up your time or your staff’s time.

Online calendars are also handy for organisations of any size to help team members share events and tasks and keep each other abreast of what other members are working on.

Their usefulness isn’t just limited to companies, however. Artists, writers, performers, bloggers and any other individual with an active public life could make good use of online calendars to let followers and fans know the whens and wheres of public appearances.

With all this in mind, we’ve compiled 15 of our best PHP calendar, booking and events scripts available for download today at CodeCanyon. This post will help you choose the one that’s right for you.

The Employee Work Schedule script is packed with features that companies will find useful for managing team productivity. The calendar can be set up to provide access to selected team members and allows the designated admin to assign tasks, or conversely team members can record their own self-assigned tasks or appointments.

Standout features:

public, private or group calendars

popup dialog for adding, editing and deleting items

ability for both admins and users to create calendars

supports recurring events

and more

User lara_c_2 says:

“One of the best scripts and support I have ever had. The author, Paul, was more than helpful and available. Great script, super flexible and offers many many more features than other similar software out there.”

The Events Calendar allows you to create several calendars dedicated to different locations or categories. Each event in the calendar comes with its own popup feature that allows you to add important details like starting and ending time for each event, textual description, photos, videos, location map, etc.

Standout features:

create an unlimited number of calendars

view event details via rollover popup

add videos and a photo gallery for each event

add the event website link

and more

User Jackaubert says:

“This is just a really nice clean calendar presentation. It is relatively easy to manage, once it is installed, and you can customize pretty much anything with enough digging into the code. Nothing is concealed or locked down.”

The NodAPS Online Booking System promises to help you manage your appointments more easily. You can create unlimited accounts with administrative, assistant, and staff permissions, and add unlimited languages to the system. You can also change the booking time and date with a drag-and-drop feature.

With Tiva Events Calendar, users can add and view all events in either calendar or list style, and they can also view the details of each event via a pop-up display when they hover over it. The app also offers full and compact layouts.

Standout features:

view events via calendar or list style

quick view event’s info with tooltip

user-friendly interface

full layout or compact layout

and more

User georgeszy says:

“It's a simple straightforward calendar plug-in. But, the after sales support is fabulous. They helped me several times with issues I ran into and were very generous with their time and expertise.”

Bookify is one of the newest apps at CodeCanyon but is already proving to be quite popular with its offering of features such as an installation wizard that doesn't require any knowledge of code to use. Bookify also has Google Calendar sync, live chat, and detailed documentation.

Standout features:

Stripe and PayPal are supported

live chat

multi-language support

Google Calendar sync

and more

User Teatone says:

“Great customer support. Great code quality! If you are looking for something that works with great support, Bookify is the one.”

The Laravel Booking System with live chat offers a great online system for booking and making appointments. Users can buy credits as a payment option, and view available services, total transactions, their total credits as well as administrator contact information via their dashboard.

From the administrative side, the system administrator can manage all things system related: general settings, payment settings, and user management. Admins can also manage bookings and respond to inquiries from their dashboard.

Standout features:

live chat

multi-language support

booking and transaction history

PayPal integration

and more

User brentxscholl says:

“This plugin works great. Great code. Customer service is fantastic. We asked for extended features and they were delivered for a reasonable price.”

Quite simply, the eCalendar script is designed to keep individual users or companies organised with a calendar that allows users to add as many events as needed, as well as details like the event title, location, time, etc.

Standout features:

choice of two designs

cross-browser compatibility (IE8+, Safari, Opera, Chrome, Firefox)

events are saved in your MySQL database

fully responsive design

and more

User levitschi says:

“Everything works perfectly! Support was better than I ever expected!”

Built with Laravel 5.5 and VueJS, LaraBooking is another great online booking app with a clean and intuitive interface that allows users to book their appointments directly on the calendar. The calendar shows the times available, and users will receive email notifications and updates about their appointments.

Standout features:

responsive

user-friendly interface

email notification

admin has the ability to review and edit all appointments and services

Gcal is one of the more straightforward of the booking systems featured here. It is specifically designed for users of Google Calendar. Users fill in a form which, once submitted, automatically adds an event to the Google Calendar of the owner of the email address submitted in the form.

Standout features:

simple functionality

view event details in popup

three-step installation

well documented

and more

Conclusion

These PHP event calendar and booking scripts just scratch the surface of products available at Envato Market. So if none of them catch your fancy, there are plenty of other great options to hold your interest!

2018-07-31T03:04:43+00:00
https://www.4elements.com/blog/read/set-up-routing-in-php-applications-using-the-symfony-routing-component#When:14:00:00Z

Today, we'll go through the Symfony Routing component, which allows you to set up routing in your PHP applications.

What Is the Symfony Routing Component?

The Symfony Routing Component is a very popular routing component which has been adopted by several frameworks and provides a lot of flexibility should you wish to set up routes in your PHP application.

If you've built a custom PHP application and are looking for a feature-rich routing library, the Symfony Routing Component is more than worth a look. It also allows you to define routes for your application in the YAML format.

Starting with installation and configuration, we'll go through real-world examples to demonstrate a variety of options the component has for route configuration. In this article, you'll learn:

installation and configuration

how to set up basic routes

how to load routes from the YAML file

how to use the all-in-one router

Installation and Configuration

In this section, we're going to install the libraries that are required in order to set up routing in your PHP applications. I assume that you've installed Composer in your system as we'll need it to install the necessary libraries that are available on Packagist.

Once you've installed Composer, go ahead and install the core Routing component using the following command.

$ composer require symfony/routing

Although the Routing component itself is sufficient to provide comprehensive routing features in your application, we'll go ahead and install a few other components as well to make our life easier and enrich the existing core routing functionality.

To start with, we'll go ahead and install the HttpFoundation component, which provides an object-oriented wrapper for PHP global variables and response-related functions. It ensures that you don't need to access global variables like $_GET and $_POST directly.

$ composer require symfony/http-foundation

Next, if you want to define your application routes in the YAML file instead of the PHP code, it's the YAML component that comes to the rescue as it helps you to convert YAML strings to PHP arrays and vice versa.

$ composer require symfony/yaml

Finally, we'll install the Config component, which provides several utility classes for initializing and dealing with configuration values defined in different types of files, such as YAML, INI, and XML. In our case, we'll use it to load routes from the YAML file.

$ composer require symfony/config

So that's the installation part, but how are you supposed to use it? In fact, it's just a matter of including the autoload.php file created by Composer in your application, as shown in the following snippet.

<?php
require_once './vendor/autoload.php';
// application code
?>

Set Up Basic Routes

In the previous section, we went through the installation of the necessary routing components. Now, you're ready to set up routing in your PHP application right away.

Let's go ahead and create the basic_routes.php file with the following contents.

Initialize the Route Object for Different Routes

The first argument of the Route constructor is the URI path, and the second argument is the array of custom attributes that you want to return when this particular route is matched. Typically, it would be a combination of the controller and method that you would like to call when this route is requested.

The above route can match URIs like foo/1, foo/123 and similar. Please note that we've restricted the {id} parameter to numeric values only, so it won't match a URI like foo/bar, since bar isn't numeric.

Add All Route Objects to the RouteCollection Object

The next step is to add route objects that we've initialized in the previous section to the RouteCollection object.

As you can see, it's pretty straightforward as you just need to use the add method of the RouteCollection object to add route objects. The first argument of the add method is the name of the route, and the second argument is the route object itself.

Initialize the RequestContext Object

Next, we need to initialize the RequestContext object, which holds the current request context information. We'll need this object when we initialize the UrlMatcher object as we'll go through it in a moment.

How to Match Routes

It's the match method of the UrlMatcher object which allows you to match any route against a set of predefined routes.

The match method takes the URI as its first argument and tries to match it against predefined routes. If the route is found, it returns custom attributes associated with that route. On the other hand, it throws the ResourceNotFoundException exception if there's no route associated with the current URI.

$parameters = $matcher->match($context->getPathInfo());

In our case, we've provided the current URI by fetching it from the $context object. So, if you're accessing the http://your-domain/basic_routes.php/foo URL, $context->getPathInfo() returns /foo. Since we've already defined a route for the foo URI, it should return the following.

Apart from this, you could also use the Routing component to generate links in your application. Given the RouteCollection and RequestContext objects, the UrlGenerator allows you to build links for specific routes.

The first argument of the generate method is the route name, and the second argument is the array that may contain parameters if it's the parameterized route. The above code should generate the /basic_routes.php/foo/123 URL.

Load Routes From the YAML File

In the previous section, we built our custom routes using the Route and RouteCollection objects. In fact, the Routing component offers different ways you could choose from to instantiate routes. You could choose from various loaders like YamlFileLoader, XmlFileLoader, and PhpFileLoader.

In this section, we'll go through the YamlFileLoader loader to see how to load routes from the YAML file.

We've used the YamlFileLoader loader to load routes from the routes.yaml file instead of defining them directly in PHP code. Apart from that, everything is the same and should produce the same results as the basic_routes.php file.
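For reference, the YAML file that YamlFileLoader consumes maps route names to paths, defaults, and requirements. A minimal sketch of what routes.yaml might look like, mirroring the foo routes above (the route names and _controller values here are illustrative assumptions, not the article's actual code):

```yaml
# routes.yaml -- illustrative sketch; route names and _controller
# values are assumptions, not the original demo's code.
foo:
    path: /foo
    defaults: { _controller: 'FooController::index' }

foo_placeholder:
    path: /foo/{id}
    defaults: { _controller: 'FooController::load' }
    requirements:
        id: '[0-9]+'
```

Each top-level key is the route name you'd otherwise pass to the add method of the RouteCollection object.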

The All-in-One Router

Lastly in this section, we'll go through the Router class, which allows you to set up routing quickly with fewer lines of code.

Go ahead and make the all_in_one_router.php file with the following contents.

With that in place, you can straight away use the match method of the Router object for route mapping.

$parameters = $router->match($requestContext->getPathInfo());

Also, you will need to use the getRouteCollection method of the Router object to fetch routes.

$routes = $router->getRouteCollection();

Conclusion

Go ahead and explore the other options available in the Routing component—I would love to hear your thoughts!

Today, we explored the Symfony Routing component, which makes implementation of routing in PHP applications a breeze. Along the way, we created a handful of examples to demonstrate various aspects of the Routing component.

I hope that you've enjoyed this article, and feel free to post your thoughts using the feed below!

2018-07-13T14:00:00+00:00
https://www.4elements.com/blog/read/creating-pretty-popup-messages-using-sweetalert2#When:13:36:53Z

Every now and then, you will have to show an alert box to your users to let them know about an error or a notification. The problem with the default alert boxes provided by browsers is that they are not very attractive. When you are creating a website with great color combinations and fancy animations to improve the browsing experience of your users, the unstyled alert boxes will seem out of place.

In this tutorial, you will learn about a library called SweetAlert2 that allows us to create all kinds of alert messages which can be customized to match the look and feel of our own website.

Display Simple Alert Messages

Before you can show all those sweet alert messages to your users, you will have to install the library and include it in your project. If you are using npm or bower, you can install it by running the following commands:

npm install sweetalert2
bower install sweetalert2

You can also get a CDN link for the latest version of the library and include it in your webpage using script tags:

Once you have installed the library, creating a sweet alert is actually very easy. All you have to do is call the swal() function. Just make sure that the function is called after the DOM has loaded.

There are two ways to create a sweet alert using the swal() function. You can either pass the title, body text and icon value in three different arguments or you can pass a single argument as an object with different values as its key-value pairs. Passing everything in an object is useful when you want to specify values for multiple arguments.

When a single argument is passed and it is a string, the sweet alert will only show a title and an OK button. Users will be able to click anywhere outside the alert or on the OK button in order to dismiss it.

When two arguments are passed, the first one becomes the title and the second one becomes the text inside the alert. You can also show an icon in the alert box by passing a third argument. This can have any of the five predefined values: warning, error, success, info, and question. If you don't pass the third argument, no icon will be shown inside the alert message.
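As a sketch, assuming the global swal() function from SweetAlert2 is available in the page, the three call forms look like this (titles and texts are illustrative):

```javascript
// Title only: shows just the title and an OK button.
swal('Operation complete');

// Title and body text.
swal('Deleted!', 'The file has been removed.');

// Title, body text, and one of the five predefined icons.
swal('Deleted!', 'The file has been removed.', 'success');

// Equivalent object form, useful once you need several options at once.
swal({
  title: 'Deleted!',
  text: 'The file has been removed.',
  type: 'success'
});
```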

Configuration Options to Customize Alerts

If you simply want to show some basic information inside an alert box, the previous example will do just fine. However, the library can actually do a lot more than just simply show users some text inside an alert message. You can change every aspect of these alert messages to suit your own needs.

We have already covered the title, the text, and the icons inside a sweet alert message. There is also an option to change the buttons inside it and control their behavior. By default, an alert will only have a single confirm button with text that says "OK". You can change the text inside the confirm button by setting the value of the confirmButtonText property. If you also want to show a cancel button in your alert messages, all you have to do is set the value of showCancelButton to true. The text inside the cancel button can be changed using the cancelButtonText property.

Each of these buttons can be given a different background color using the confirmButtonColor and cancelButtonColor properties. The default color for the confirm button is #3085d6, while the default color for the cancel button is #aaa. If you want to apply any other customization on the confirm or cancel buttons, you can simply use the confirmButtonClass and cancelButtonClass properties to add a new class to them. Once the classes have been added, you will be able to use CSS to change the appearance of those buttons. You can also add a class on the main modal itself by using the customClass property.

If you interacted with the alert messages in the first example, you might have noticed that the modals can be closed by pressing either the Enter or Escape key. Similarly, you can also click anywhere outside the modal in order to dismiss it. This happens because the values of allowEnterKey, allowEscapeKey, and allowOutsideClick are all set to true by default.

When you show two different buttons inside a modal, the confirm button is the one which is in focus by default. You can remove the focus from the confirm button by setting the value of focusConfirm to false. Similarly, you can also set the focus on the cancel button by setting the value of focusCancel to true.

The confirm button is shown on the left side by default. You have the option to reverse the positions of the confirm and cancel buttons by setting the value of reverseButtons to true.
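Putting the button-related properties together, a configuration object might look like the following sketch (the title, text, and colors are illustrative; passing the object to swal() would render the modal, which requires the SweetAlert2 library in the page):

```javascript
// Hedged sketch: an options object combining the button-related
// properties discussed above.
const deleteConfirm = {
  title: 'Are you sure?',
  text: 'You will not be able to recover this file!',
  type: 'warning',
  confirmButtonText: 'Yes, delete it',
  confirmButtonColor: '#d33',
  showCancelButton: true,
  cancelButtonText: 'Keep the file',
  cancelButtonColor: '#aaa',
  focusCancel: true,   // focus the cancel button instead of confirm
  reverseButtons: true // swap the positions of the two buttons
};

// swal(deleteConfirm); // would show the modal in the browser
```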

Besides changing the position and color of buttons inside the alert messages, you can also change the background and position of the alert message or the backdrop around it. Not only that, but the library also allows you to show your own custom icons or images in the alert messages. This can be helpful in a lot of situations.

You can customize the backdrop of a sweet alert using the backdrop property. This property accepts either a Boolean or a string as its value. By default, the backdrop of an alert message consists of a mostly transparent gray color. You can hide it completely by setting the value of backdrop to false. Similarly, you can also show your own images in the background by setting the backdrop value as a string. In such cases, the whole value of the backdrop string is assigned to the CSS background property. The background of a sweet alert message can be controlled using the background property. All alert messages have a completely white background by default.

All the alert messages pop up at the center of the window by default. However, you can make them pop up from a different location using the position property. This property can have nine different values with self-explanatory names: top, top-start, top-end, center, center-start, center-end, bottom, bottom-start, and bottom-end.

You can disable the animation when a modal pops up by setting the value of the animation property to false. The library also provides a timer property which can be used to auto-close the modal after a specified number of milliseconds.
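The layout-related properties can be combined into something like this toast-style sketch (values are illustrative; swal() would display it in the browser):

```javascript
// Hedged sketch: a self-dismissing, corner-positioned alert.
const toastLike = {
  title: 'Saved!',
  position: 'top-end',   // pop up near the top-right corner
  backdrop: false,       // no gray overlay behind the modal
  background: '#fffbe6', // custom background for the modal itself
  animation: false,      // show the modal without the pop-up animation
  timer: 2000            // auto-close after 2,000 milliseconds
};

// swal(toastLike);
```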

In the following example, I have used different combinations of all the properties discussed in this section to create four different alert messages. This should demonstrate how you can completely change the appearance and behavior of a modal created by the SweetAlert2 library.

Important SweetAlert2 Methods

Initializing different sweet alert messages to show them to users is one thing, but sometimes you will also need access to methods which control the behavior of those alert messages after initialization. Fortunately, the SweetAlert2 library provides many methods that can be used to show or hide a modal as well as get its title, text, image, etc.

You can check if a modal is visible or hidden using the isVisible() method. You can also programmatically close an open modal by using the close() or closeModal() methods. If you happen to use the same set of properties for multiple alert messages during their initialization, you can simply call the setDefaults({configurationObject}) method in the beginning to set the value of all those properties at once. The library also provides a resetDefaults() method to reset all the properties to their default values.

You can get the title, content, and image of a modal using the getTitle(), getContent(), and getImage() methods. Similarly, you can also get the HTML that makes up the confirm and cancel buttons using the getConfirmButton() and getCancelButton() methods.
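A brief sketch, assuming the swal global from SweetAlert2 is loaded in the page (the title and text are illustrative):

```javascript
swal('Hello', 'Just a greeting.', 'info');

if (swal.isVisible()) {
  // getTitle() returns the DOM element that holds the title text
  console.log(swal.getTitle().textContent);
}

// Programmatically dismiss the currently open modal:
swal.close();
```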

There are a lot of other methods which can be used to perform other tasks like programmatically clicking on the confirm or cancel buttons.

Final Thoughts

The SweetAlert2 library makes it very easy for developers to create custom alert messages to show to their users by simply setting the values of a few properties. This tutorial was aimed at covering the basics of this library so that you can create your own custom alert messages quickly.

To prevent the post from getting too big, I have only covered the most commonly used methods and properties. If you want to read about all the other methods and properties which can be used to create advanced alert messages, you should go through the detailed documentation of the library.

Don't forget to check out the other JavaScript resources we have available in the Envato Market, as well.

Feel free to let me know if there is anything that you would like me to clarify in this tutorial.

]]>2018-06-30T13:36:53+00:00//www.4elements.com/blog/read/create-interactive-gradient-animations-using-granim.js
https://www.4elements.com/blog/read/create-interactive-gradient-animations-using-granim.js#When:13:36:37Z

Gradients can instantly improve the look and feel of a website, if used carefully with the right color combination. CSS has also come a long way when it comes to applying a gradient on any element and animating it. In this tutorial, we will move away from CSS and create gradient animations using a JavaScript library called Granim.js.

This library draws and animates gradients on a given canvas according to the parameters you set when creating a Granim instance. There are different methods which can be used to make your gradient respond to different user events like a button click. In this tutorial, we will learn about this library in detail and create some simple but nice gradient animation effects.

Create Solid Color Gradient Animations

Before we begin creating any gradient, you will have to include the library in your project. For this, you can either download Granim.js from GitHub or link directly to a CDN. The library version that I am using in this tutorial is 1.1. Some methods that we will discuss here were only added in version 1.1, so using an older library version when following this tutorial will not always give the expected result. Keeping these points in mind, let's create our first gradient using Granim.js.

Every time you create a new Granim instance, you can pass it an object of key-value pairs, where the key is the name of a particular property and the value is the value of the property. The element property is used to specify the CSS selector or DOM node which will point to the canvas on which you want to apply a particular gradient.

When you create a gradient animation where the colors change from a relatively light value to a darker value, it might become impossible to read some text that you have positioned on the canvas. For example, the initial gradient applied on an element might be a combination of yellow and light green. In such cases, the text of the canvas would have to be darker for users to be able to read it properly.

Similarly, the gradient might consist of dark red and black at some other point, and in such cases the dark text would not be easy to read. Granim.js solves this problem for you by allowing you to specify a container element on which you can add the dark and light classes to style the text or other elements accordingly. The value of the elToSetClassOn property is set to body by default, but you can also specify any other container element. The dark and light class names are updated automatically based on the average color of the gradient.

The elToSetClassOn property does not work by itself. You will also have to specify a name for the Granim instance that you created using the name property. If you set the name to something like first-gradient, the name of the classes applied on the container element will become first-gradient-light or first-gradient-dark based on how light or dark the gradient currently is. This way, any element which needs to change its color based on the lightness or darkness of the gradient will be able to do so with ease.

The direction in which a gradient should be drawn can be specified using the direction property. It has four valid values: diagonal, left-right, top-bottom, and radial. The gradients that you create will not move in those particular directions—they will just be drawn that way. The position of the gradient doesn't change during the animation; only its colors do.

There is also a states property, which accepts an object as its value. Each state specified inside the states object will have a name and a set of key-value pairs. You can use the gradients property to specify different colors which should make up a particular gradient. You can set the value of this property to be equal to an array of gradients.

Granim.js will automatically create an animation where the colors of the gradient change from one set to another. The transition between different gradients takes 5,000 milliseconds by default. However, you can speed up or slow down the animation by setting an appropriate value for the transitionSpeed property.

After the gradients start animating, they will have to come to an end at one point or another. You can specify if the gradient should then just stop there or start animating again from the beginning using the loop property. This is set to true by default, which means that the gradient would keep animating.

Each color in a gradient can have a different opacity, which can be specified using the opacity property. This property accepts an array to determine how opaque each color is going to be. For two gradient colors, the value can be [0.1, 0.8]. For three gradient colors, the value can be [1, 0.5, 0.75], etc.

You also have the option to specify the time it takes for the gradient animation to go from one state to another using the stateTransitionSpeed property. This is different from the transitionSpeed property, which controls the animation speed inside the same state.

In the following code snippet, I have created two different Granim instances to draw different gradients. In the first case, we have only specified a single gradient, so there is no actual animation and the colors don't change at all.
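A sketch of the two setups described above; the canvas selectors and colors are illustrative, not taken from the original demo. Each object would be passed to new Granim(...) once the library is loaded:

```javascript
// One gradient only: it is drawn, but its colors never change.
const staticConfig = {
  element: '#canvas-basic',
  direction: 'left-right',
  states: {
    'default-state': {
      gradients: [['#834d9b', '#d04ed6']]
    }
  }
};

// Two gradients: the colors animate from one set to the other and loop.
const animatedConfig = {
  element: '#canvas-animated',
  name: 'first-gradient',
  elToSetClassOn: '.canvas-wrapper', // receives first-gradient-light/-dark
  direction: 'diagonal',
  states: {
    'default-state': {
      gradients: [
        ['#29323c', '#485563'],
        ['#ff6b6b', '#556270']
      ],
      transitionSpeed: 2000, // ms to animate between the two gradients
      loop: true             // start over after the last gradient
    }
  }
};

// const granim1 = new Granim(staticConfig);
// const granim2 = new Granim(animatedConfig);
```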

Animate Gradients Over an Image

Another common use of the Granim.js library would be to animate a gradient over an image drawn on the canvas. You can specify different properties to control how the image is drawn on the canvas using the image property. It accepts an object with key-value pairs as its value. You can use the source property to specify the path from which the library should get the image to draw it on the canvas.

Any image that you draw on the canvas will be drawn so that its center coincides with the center of the canvas. However, you can use the position property to specify a different position to draw the image. This property accepts an array of two elements as its value. The first element can have the values left, center, and right. The second element can have the values top, center, and bottom.

These properties are generally useful when you know that the size of the canvas and the image won't match. In these situations, you can use this property to specify the part of the image that should appear on the canvas.

If the image and the canvas have different dimensions, you can also stretch the image so that it fits properly inside the canvas. The stretchMode property also accepts an array of two elements as its value. The three valid values for both of these elements are stretch, stretch-if-smaller, and stretch-if-larger.

A gradient with blend mode set to normal will completely hide the image underneath it. The only way to show an image below a gradient of solid colors would be to choose a different blend mode. You can read about all the possible blend mode values for a canvas on MDN.
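The image-related options can be sketched like this (the source path, selector, and blending mode are illustrative; the object would be passed to new Granim(...)):

```javascript
// Hedged sketch: a gradient animated over an image.
const imageConfig = {
  element: '#canvas-image',
  direction: 'top-bottom',
  image: {
    source: 'img/background.jpg',   // hypothetical path to the image
    position: ['center', 'bottom'], // which part of the image to favor
    stretchMode: ['stretch', 'stretch-if-smaller'],
    blendingMode: 'multiply'        // let the image show through the gradient
  },
  states: {
    'default-state': {
      gradients: [
        ['#ffd89b', '#19547b'],
        ['#1d4350', '#a43931']
      ]
    }
  }
};

// const granimImage = new Granim(imageConfig);
```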

I would like to point out that the ability to animate a gradient over an image was only added in version 1.1 of the Granim.js library. So you will have to use any version higher than that if you want this feature to work properly.

Methods to Control Gradient Animation Playback

Up to this point, we did not have any control over the playback of the gradient animation once it was instantiated. We could not pause/play it or change its state, direction, etc. The Granim.js library has different methods which let you accomplish all these tasks with ease.

You can play or pause any animation using the play() and pause() methods. Similarly, you can change the state of the gradient animation using the changeState('state-name') method. The state-name here has to be one of the state names that you defined when instantiating the Granim instance.

More methods were added in version 1.1 which allow you to change the direction and blend mode of an animation on the fly using the changeDirection('direction-name') and changeBlendingMode('blending-mode-name') methods.

In the following code snippet, I am using a button click event to call all these methods, but you can use any other event to call them.
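A sketch of that wiring; the button ids are hypothetical, and granim is assumed to be an existing Granim instance with a 'dark-state' defined in its states object:

```javascript
document.querySelector('#play').addEventListener('click', () => granim.play());
document.querySelector('#pause').addEventListener('click', () => granim.pause());
document.querySelector('#state').addEventListener('click', () => granim.changeState('dark-state'));

// These two methods were only added in v1.1:
document.querySelector('#direction').addEventListener('click', () => granim.changeDirection('radial'));
document.querySelector('#blend').addEventListener('click', () => granim.changeBlendingMode('multiply'));
```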

Final Thoughts

In this tutorial, I have covered the basics of the Granim.js library so that you can get started with it as quickly as possible. There are a few other methods and properties that you might find useful when creating these gradient animations. You should read the official documentation in order to read about them all.

If you’re looking for additional JavaScript resources to study or to use in your work, check out what we have available in the Envato Market.

If you have any questions related to this tutorial, feel free to let me know in the comments.

]]>2018-06-28T13:36:37+00:00//www.4elements.com/blog/read/how-to-build-complex-large-scale-vue.js-apps-with-vuex
https://www.4elements.com/blog/read/how-to-build-complex-large-scale-vue.js-apps-with-vuex#When:12:47:47Z

It's so easy to learn and use Vue.js that anyone can build a simple application with that framework. Even novices, with the help of Vue's documentation, can do the job. However, when complexity comes into play, things get a bit more serious. The truth is that multiple, deeply nested components with shared state can quickly turn your application into an unmaintainable mess.

The main problem in a complex application is how to manage the state between components without writing spaghetti code or producing side effects. In this tutorial you'll learn how to solve that problem by using Vuex: a state management library for building complex Vue.js applications.

What Is Vuex?

Vuex is a state management library specifically tuned for building complex, large-scale Vue.js applications. It uses a global, centralized store for all the components in an application, taking advantage of its reactivity system for instant updates.

The Vuex store is designed in such a way that it is not possible to change its state from any component. This ensures that the state can only be mutated in a predictable manner. Thus your store becomes a single source of truth: every data element is only stored once and is read-only to prevent the application's components from corrupting the state that is accessed by other components.

Why Do You Need Vuex?

You may ask: Why do I need Vuex in the first place? Can't I just put the shared state in a regular JavaScript file and import it into my Vue.js application?

You can, of course, but compared to a plain global object, the Vuex store has some significant advantages and benefits:

The Vuex store is reactive. Once components retrieve a state from it, they will reactively update their views every time the state changes.

Components cannot directly mutate the store's state. The only way to change the store's state is by explicitly committing mutations. This ensures every state change leaves a trackable record, which makes the application easier to debug and test.

The Vuex store gives you a bird's eye view of how everything is connected and affected in your application.

It's easier to maintain and synchronize the state between multiple components, even if the component hierarchy changes.

Vuex makes direct cross-component communication possible.

If a component is destroyed, the state in the Vuex store will remain intact.

Getting Started With Vuex

Before we get started, I want to make several things clear.

First, to follow this tutorial, you need to have a good understanding of Vue.js and its components system, or at least minimal experience with the framework.

Also, the aim of this tutorial is not to show you how to build an actual complex application; the aim is to focus your attention more on Vuex concepts and how you can use them to build complex applications. For that reason, I'm going to use very plain and simple examples, without any redundant code. Once you fully grasp the Vuex concepts, you will be able to apply them on any level of complexity.

Setting Up a Vuex Project

The first step to get started with Vuex is to have Vue.js and Vuex installed on your machine. There are several ways to do that, but we'll use the easiest one. Just create an HTML file and add the necessary CDN links:
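For instance, unpkg serves development builds of both libraries (the URLs are illustrative; pin exact versions for anything beyond experimentation):

```html
<script src="https://unpkg.com/vue"></script>
<script src="https://unpkg.com/vuex"></script>
```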

I used some CSS to make the components look nicer, but you don't need to worry about that CSS code. It only helps you to gain a visual notion about what is going on. Just copy and paste the following inside the <head> tag:

Here, we have a Vue instance, a parent component, and two child components. Each component has a heading "Score:" where we'll output the app state.

The last thing you need to do is to put a wrapping <div> with id="app" right after the opening <body>, and then place the parent component inside:

<div id="app">
<parent/>
</div>

The preparation work is now done, and we're ready to move on.

Exploring Vuex

State Management

In real life, we deal with complexity by using strategies to organize and structure the content we want to use. We group related things together in different sections, categories, etc. It's like a book library, in which the books are categorized and put in different sections so that we can easily find what we are looking for. Vuex arranges the application data and logic related to state in four groups or categories: state, getters, mutations, and actions.

State and mutations are the base for any Vuex store:

state is an object that holds the state of the application data.

mutations is also an object containing methods which affect the state.

Getters and actions are like logical projections of state and mutations:

getters contain methods used to abstract the access to the state, and to do some preprocessing jobs, if needed (data calculating, filtering, etc.).

actions are methods used to trigger mutations and execute asynchronous code.

Let's explore the following diagram to make things a bit clearer:

On the left side, we have an example of a Vuex store, which we'll create later on in this tutorial. On the right side, we have a Vuex workflow diagram, which shows how the different Vuex elements work together and communicate with each other.

In order to change the state, a particular Vue component must commit mutations (e.g. this.$store.commit('increment', 3)), and then, those mutations change the state (score becomes 3). After that, the getters are automatically updated thanks to Vue's reactive system, and they render the updates in the component's view (with this.$store.getters.score).

Mutations cannot execute asynchronous code, because this would make it impossible to record and track the changes in debug tools like Vue DevTools. To use asynchronous logic, you need to put it in actions. In this case, a component will first dispatch actions (this.$store.dispatch('incrementScore', 3000)) where the asynchronous code is executed, and then those actions will commit mutations, which will mutate the state.

Create a Vuex Store Skeleton

Now that we've explored how Vuex works, let's create the skeleton for our Vuex store. Put the following code above the ChildB component registration:
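A minimal skeleton looks like this. Note that the store also has to be passed to the Vue instance, e.g. new Vue({ el: '#app', store, ... }), so that components can reach it as this.$store:

```javascript
const store = new Vuex.Store({
  state: {
    // single source of truth: the shared application data
  },
  getters: {
    // computed, read-only access to the state
  },
  mutations: {
    // synchronous functions; the only place the state changes
  },
  actions: {
    // asynchronous operations that commit mutations
  }
})
```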

State Properties

The state object contains all of the shared data in your application. Of course, if needed, each component can have its own private state too.

Imagine that you want to build a game application, and you need a variable to store the game's score. So you put it in the state object:

state: {
score: 0
}

Now, you can access the state's score directly. Let's go back to the components and reuse the data from the store. In order to be able to reuse reactive data from the store's state, you should use computed properties. So let's create a score() computed property in the parent component:

computed: {
score () {
return this.$store.state.score
}
}

In the parent component's template, put the {{ score }} expression:

<h1> Score: {{ score }} </h1>

And now, do the same for the two child components.

Vuex is so smart that it will do all the work for us to reactively update the score property whenever the state changes. Try to change the score's value and see how the result updates in all three components.

Creating Getters

It is, of course, good that you can reuse the this.$store.state keyword inside the components, as you saw above. But imagine the following scenarios:

In a large-scale application, where multiple components access the state of the store by using this.$store.state.score, you decide to change the name of score. This means that you have to change the name of the variable inside each and every component that uses it!

You want to use a computed value of the state. For example, let's say you want to give players a bonus of 10 points when the score reaches 100 points. So, when the score hits 100 points, a 10-point bonus is added. This means each component would have to contain a function that reuses the score and increments it by 10. You would have repeated code in each component, which is not good at all!

Fortunately, Vuex offers a working solution to handle such situations. Imagine the centralized getter that accesses the store's state and provides a getter function to each of the state's items. If needed, this getter can apply some computation to the state's item. And if you need to change the names of some of the state's properties, you only change them in one place, in this getter.

Let's create a score() getter:

getters: {
score (state){
return state.score
}
}

A getter receives the state as its first argument, and then uses it to access the state's properties.

Note: Getters also receive getters as the second argument. You can use it to access the other getters in the store.

In all components, modify the score() computed property to use the score() getter instead of the state's score directly.

computed: {
score () {
return this.$store.getters.score
}
}

Now, if you decide to change the score to result, you need to update it only in one place: in the score() getter. Try it out in this CodePen!

Creating Mutations

Mutations are the only permissible way to change the state. Triggering changes simply means committing mutations in component methods.

A mutation is pretty much an event handler function that is defined by name. Mutation handler functions receive the state as their first argument. You can also pass an additional second argument, which is called the payload of the mutation.

Let's create an increment() mutation:

mutations: {
increment (state, step) {
state.score += step
}
}

Mutations cannot be called directly! To perform a mutation, you should call the commit() method with the name of the corresponding mutation and possible additional parameters. It might be just one, like the step in our case, or there might be multiple ones wrapped in an object.

Let's use the increment() mutation in the two child components by creating a method named changeScore():

methods: {
changeScore (){
this.$store.commit('increment', 3);
}
}

We are committing a mutation instead of changing this.$store.state.score directly, because we want to explicitly track the change made by the mutation. This way, we make our application logic more transparent, traceable, and easy to reason about. In addition, it makes it possible to implement tools, like Vue DevTools or Vuetron, that can log all mutations, take state snapshots, and perform time-travel debugging.

Now, let's put the changeScore() method into use. In each template of the two child components, create a button and add a click event listener to it:

<button @click="changeScore">Change Score</button>

When you click the button, the state will be incremented by 3, and this change will be reflected in all components. Now we have effectively achieved direct cross-component communication, which is not possible with the Vue.js built-in "props down, events up" mechanism. Check it out in our CodePen example.

Creating Actions

An action is just a function that commits a mutation. It changes the state indirectly, which allows for the execution of asynchronous operations.

Actions get the context as the first parameter, which contains all methods and properties from the store. Usually, we just extract the parts we need by using ES2015 argument destructuring. The commit method is one we need very often. Actions also get a second payload argument, just like mutations.

To call an action, we use the dispatch() method with the name of the corresponding action and additional parameters, just as with mutations.
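The incrementScore() action dispatched earlier with this.$store.dispatch('incrementScore', 3000) can be sketched like this, with the delay passed as the payload (a hedged reconstruction consistent with the mutation defined above):

```javascript
actions: {
  incrementScore ({ commit }, delay) {
    // wait for the given delay, then commit the same mutation
    setTimeout(() => {
      commit('increment', 3)
    }, delay)
  }
}
```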

Now, the Change Score button from the ChildA component will increment the score by 3. The identical button from the ChildB component will do the same, but after a delay of 3 seconds. In the first case, we're executing synchronous code and we use a mutation, but in the second case we're executing asynchronous code, and we need to use an action instead. See how it all works in our CodePen example.

Vuex Mapping Helpers

Vuex offers some useful helpers which can streamline the process of creating state, getters, mutations, and actions. Instead of writing those functions manually, we can tell Vuex to create them for us. Let's see how it works.

Instead of writing the score() computed property like this:

computed: {
score () {
return this.$store.state.score
}
}

We just use the mapState() helper like this:

computed: {
...Vuex.mapState(['score'])
}

And the score() property is created automatically for us.

The same is true for the getters, mutations, and actions.

To create the score() getter, we use the mapGetters() helper:

computed: {
...Vuex.mapGetters(['score'])
}

To create the changeScore() method, we use the mapMutations() helper like this:

methods: {
...Vuex.mapMutations({changeScore: 'increment'})
}

When used for mutations and actions with the payload argument, we must pass that argument in the template where we define the event handler:

<button @click="changeScore(3)">Change Score</button>

If we want changeScore() to use an action instead of a mutation, we use mapActions() like this:

methods: {
...Vuex.mapActions({changeScore: 'incrementScore'})
}

Again, we must define the delay in the event handler:

<button @click="changeScore(3000)">Change Score</button>

Note: All mapping helpers return an object. So, if we want to use them in combination with other local computed properties or methods, we need to merge them into one object. Fortunately, with the object spread operator (...), we can do it without using any utility.

Making the Store More Modular

It seems that the problem with complexity constantly obstructs our way. We solved it before by creating the Vuex store, where we made the state management and component communication easy. In that store, we have everything in one place, easy to manipulate and easy to reason about.

However, as our application grows, this easy-to-manage store file becomes larger and larger and, as a result, harder to maintain. Again, we need some strategies and techniques for improving the application structure by returning it to its easy-to-maintain form. In this section, we'll explore several techniques which can help us in this undertaking.

Using Vuex Modules

Vuex allows us to split the store object into separate modules. Each module can contain its own state, mutations, actions, getters, and other nested modules. After we've created the necessary modules, we register them in the store.
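A sketch of that registration, assuming (as an illustration) that the childB module renames score to result and increment to increase:

```javascript
const childA = {
  state: { score: 0 },
  getters: {
    score (state) { return state.score }
  },
  mutations: {
    increment (state, step) { state.score += step }
  }
}

const childB = {
  state: { result: 0 },
  getters: {
    result (state) { return state.result }
  },
  mutations: {
    increase (state, step) { state.result += step }
  }
}

const store = new Vuex.Store({
  modules: {
    scoreBoard: childA,
    resultBoard: childB
  }
})
```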

In the above example, we created two modules, one for each child component. The modules are just plain objects, which we register as scoreBoard and resultBoard in the modules object inside the store. The code for childA is the same as that in the store from the previous examples. In the code for childB, we add some changes in values and names.

Let's now tweak the ChildB component to reflect the changes in the resultBoard module.

Namespaced Modules

If you want or need to use one and the same name for a particular property or method in your modules, then you should consider namespacing them. Otherwise you may observe some strange side effects, such as every action with that name being executed, or the wrong state values being returned.

To namespace a Vuex module, you just set the namespaced property to true.

In the above example, we made the property and method names the same for the two modules. And now we can use a property or method prefixed with the name of the module. For example, if we want to use the score() getter from the resultBoard module, we type it like this: resultBoard/score. If we want the score() getter from the scoreBoard module, then we type it like this: scoreBoard/score.
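A minimal sketch of a namespaced module (the values are illustrative):

```javascript
// With namespaced: true, both modules can safely expose a "score" getter.
const resultBoard = {
  namespaced: true,
  state: { score: 3 },
  getters: {
    score (state) { return state.score }
  }
}

// Once registered in the store, the getter is addressed with the prefix:
// this.$store.getters['resultBoard/score']
```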

As you can see in our CodePen example, we can now use the method or property we want and get the result we expect.

Splitting the Vuex Store Into Separate Files

In the previous section, we improved the application structure to some extent by separating the store into modules. We made the store cleaner and more organized, but still all of the store code and its modules lie in the same big file.

So the next logical step is to split the Vuex store into separate files. The idea is to have an individual file for the store itself and one for each of its objects, including the modules. This means having separate files for the state, getters, mutations, actions, and for each individual module (store.js, state.js, getters.js, etc.). You can see an example of this structure at the end of the next section.

Using Vue Single File Components

We've made the Vuex store as modular as we can. The next thing we can do is to apply the same strategy to the Vue.js components too. We can put each component in a single, self-contained file with a .vue extension. To learn how this works, you can visit the Vue Single File Components documentation page.

So, in our case, we'll have three files: Parent.vue, ChildA.vue, and ChildB.vue.

Finally, if we combine all three techniques, we'll end up with the following or similar structure:
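The combined structure isn't shown in this excerpt; a hypothetical layout, following the file and module names used in this article, would be:

```
├── index.html
├── components/
│   ├── Parent.vue
│   ├── ChildA.vue
│   └── ChildB.vue
└── store/
    ├── store.js
    ├── state.js
    ├── getters.js
    ├── mutations.js
    ├── actions.js
    └── modules/
        ├── scoreBoard.js
        └── resultBoard.js
```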

Recap

Let's recap some main points you need to remember about Vuex:

Vuex is a state management library that helps us build complex, large-scale applications. It uses a global, centralized store for all the components in an application. To abstract the state, we use getters. Getters are pretty much like computed properties and are an ideal solution when we need to filter or calculate something at runtime.

The Vuex store is reactive, and components cannot directly mutate the store's state. The only way to mutate the state is by committing mutations, which are synchronous transactions. Each mutation should perform only one action, must be as simple as possible, and is only responsible for updating a piece of the state.

Asynchronous logic should be encapsulated in actions. Each action can commit one or more mutations, and one mutation can be committed by more than one action. Actions can be complex, but they never change the state directly.

Finally, modularity is the key to maintainability. To deal with complexity and make our code modular, we use the "divide and conquer" principle and the code splitting technique.

Conclusion

That's it! You already know the main concepts behind Vuex, and you are ready to start applying them in practice.

For the sake of brevity and simplicity, I intentionally omitted some details and features of Vuex, so you'll need to read the full Vuex documentation to learn everything about Vuex and its feature set.

Creating Stylish and Responsive Progress Bars Using ProgressBar.js (published 2018-06-11)

Nothing on the web happens instantly. The only difference is in the time it takes for a process to complete. Some processes can happen in a few milliseconds, while others can take up to several seconds or minutes. For example, you might be editing a very large image uploaded by your users, and this process can take some time. In such cases, it is a good idea to let visitors know that the website is not stuck but is actually working on the image and making progress.

One of the most common ways to show readers how much a process has progressed is to use progress bars. In this tutorial, you will learn how to use the ProgressBar.js library to create different progress bars with simple and complex shapes.

Creating a Basic Progress Bar

ProgressBar.js is supported in all major browsers, including IE9+, which means you can use it on any website you are creating with confidence. You can get the latest version of the library from GitHub, or use a CDN link to add it directly to your project. Once you have included the library, creating a progress bar with it is easy.

To avoid any unexpected behavior, please make sure that the container of the progress bar has the same aspect ratio as the progress bar. In the case of a circle, the aspect ratio of the container should be 1:1 because the width will be equal to the height. In the case of a semicircle, the aspect ratio of the container should be 2:1 because the width will be double the height. Similarly, in the case of a simple line, the container should have an aspect ratio of 100:strokeWidth for the line.

When creating progress bars with a line, circle, or semicircle, you can simply use the ProgressBar.Shape() method to create the progress bar. In this case, the Shape can be a Circle, Line, or SemiCircle. You can pass two parameters to the Shape() method. The first parameter is a selector or DOM node to identify the container of the progress bar. The second parameter is an object with key-value pairs which determine the appearance of the progress bar.

You can specify the color of the progress bar using the color property. Any progress bar that you create will have a dark gray color by default. The thickness of the progress bar can be specified using the strokeWidth property. You should keep in mind that the width here is not in pixels but in terms of a percentage of the canvas size. For instance, if the canvas is 200px wide, a strokeWidth value of 5 will create a line which is 10px thick.

Besides the main progress bar, the library also allows you to draw a trailing line which will show readers the path on which the progress bar will move. The color of the trail line can be specified using the trailColor property, and its width can be specified using the trailWidth property. Just like strokeWidth, the trailWidth property also computes the width in percentage terms.

The total time taken by the progress bar to go from its initial state to its final state can be specified using the duration property. By default, a progress bar will complete its animation in 800 milliseconds.

You can use the easing property to specify how a progress bar should move during the animation. All progress bars will move with a linear speed by default. To make the animation more appealing, you can set this value to something else like easeIn, easeOut, easeInOut, or bounce.

After specifying the initial parameter values, you can animate the progress bars using the animate() method. This method accepts three parameters. The first parameter is the amount up to which you want to animate the progress line. The other two parameters are optional. The second parameter can be used to override any animation property values that you set during initialization. The third parameter is a callback function to do something else once the animation ends.

In the following example, I have created three different progress bars using all the properties we have discussed so far.
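The example itself isn't embedded in this excerpt, but a sketch of how those properties fit together for one line-shaped bar looks like this (the option values are illustrative, and the constructor call assumes ProgressBar.js is loaded in a browser page):

```javascript
// Illustrative options for a line-shaped progress bar; every property
// below is described in the text above, but the values are examples,
// not the article's original demo values.
const lineOptions = {
  color: '#e91e63',    // bar color (the default is dark gray)
  strokeWidth: 4,      // thickness as a percentage of canvas size, not pixels
  trailColor: '#eee',  // color of the trailing path
  trailWidth: 1,       // trail thickness, also a percentage
  duration: 1500,      // animation time in ms (the default is 800)
  easing: 'easeInOut', // the default is linear
};

// In the browser, with ProgressBar.js loaded:
//   const bar = new ProgressBar.Line('#line-container', lineOptions);
//   bar.animate(0.75); // animate to 75%
```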

Animating Text Values With the Progress Bar

The only thing that changes with the animation of the progress bars in the above example is their length. However, ProgressBar.js also allows you to change other physical attributes like the width and color of the stroking line. In such cases, you will have to specify the initial values for the progress bar inside the from parameter and the final values inside the to parameter when initializing the progress bars.

You can also tell the library to create an accompanying text element with the progress bar to show some textual information to your users. The text can be anything from a static value to a numerical value indicating the progress of the animation. The text parameter will accept an object as its value.

This object can have a value parameter to specify the initial text to be shown inside the element. You can also provide a class name to be added to the text element using the className parameter. If you want to apply some inline styles to the text element, you can specify them all as a value of the style parameter. All the default styles can be removed by setting the value of style to null. It is important to remember that the default values only apply if you have not set a custom value for any CSS property inside style.

The value inside the text element will stay the same during the whole animation if you don't update it yourself. Luckily, ProgressBar.js also provides a step parameter which can be used to define a function to be called with each animation step. Since this function will be called multiple times each second, you need to be careful with its use and keep the calculations inside it simple.
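Putting the from/to, text, and step parameters together, here is a hedged sketch (the values and class name are illustrative; the step callback signature, which receives the animation state and the shape instance, follows the ProgressBar.js API):

```javascript
// Illustrative options combining animated attributes and a text element.
const circleOptions = {
  from: { color: '#aaa', width: 1 },  // initial stroke color and width
  to: { color: '#e91e63', width: 4 }, // final stroke color and width
  duration: 1400,
  text: {
    value: '0',                 // initial text shown inside the element
    className: 'progress-text', // illustrative class name
    style: null,                // null removes all default inline styles
  },
  // Called on every animation step; keep the work in here cheap, since
  // it runs many times per second.
  step: (state, shape) => {
    shape.path.setAttribute('stroke', state.color);
    shape.setText(Math.round(shape.value() * 100));
  },
};

// In the browser, with ProgressBar.js loaded:
//   new ProgressBar.Circle('#circle-container', circleOptions).animate(1);
```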

Creating Progress Bars With Custom Shapes

Sometimes, you might want to create progress bars with different shapes that match the overall theme of your website. ProgressBar.js allows you to create progress bars with custom shapes using the Path() method. This method works like Shape() but provides fewer parameters to customize the progress bar animation. You can still provide a duration and easing value for the animation. If you want to animate the color and width of the stroke used for drawing the custom path, you can do so inside the from and to parameters.

The library does not provide any way to draw a trail for the custom path, as it did for simple lines and circles. However, you can create the trail yourself fairly easily. In the following example, I have created a triangular progress bar using the Path() method.

Before writing the JavaScript code, we will have to define our custom SVG path in HTML. Here is the code I used to create a simple triangle:
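The original markup isn't reproduced in this excerpt; a minimal sketch of such an SVG, with illustrative coordinates, colors, and id, could look like this:

```html
<!-- Two identical triangle paths: a light gray trail underneath,
     and the path to be animated, identified by an id. -->
<svg width="200" height="200" viewBox="0 0 100 100">
  <path d="M10 90 L50 10 L90 90 Z" fill="none"
        stroke="#ddd" stroke-width="2"/>
  <path id="triangle-path" d="M10 90 L50 10 L90 90 Z" fill="none"
        stroke="#e91e63" stroke-width="2"/>
</svg>
```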

You might have noticed that I created two different path elements. The first path has a light gray color which acts like the trail we saw with simple progress bars in the previous section. The second path is the one that we animate with our code. We have given it an id which is used to identify it in the JavaScript code below.
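The corresponding JavaScript could then be sketched as follows ('#triangle-path' is a hypothetical id for the second path element, and the option values are illustrative):

```javascript
// Options for animating a custom SVG path; Path() supports duration,
// easing, and from/to for the stroke's color and width.
const pathOptions = {
  duration: 1500,
  easing: 'easeIn',
  from: { color: '#aaa', width: 1 },
  to: { color: '#e91e63', width: 3 },
};

// In the browser, with ProgressBar.js loaded:
//   const bar = new ProgressBar.Path('#triangle-path', pathOptions);
//   bar.animate(1); // trace the full triangle outline
```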

Final Thoughts

As you saw in this tutorial, ProgressBar.js lets you create different kinds of progress bars with ease. It also gives you the option to animate different attributes of the progress bar, like its width and color.

Not only that, but you can also use this library to change the value of an accompanying text element in order to show the progress in textual form. This tutorial covers everything that you need to know to create simple progress bars. However, you can go through the documentation to learn more about the library.

If there is anything that you would like me to clarify in this tutorial, feel free to let me know in the comments.

Set Up an OAuth2 Server Using Passport in Laravel (published 2018-06-08)

In this article, we're going to explore how you can set up a fully fledged OAuth2 server in Laravel using the Laravel Passport library. We'll go through the necessary server configurations, along with a real-world example demonstrating how to consume OAuth2 APIs.

I assume that you’re familiar with the basic OAuth2 concepts and flow as we’re going to discuss them in the context of Laravel. In fact, the Laravel Passport library makes it pretty easy to quickly set up an OAuth2 server in your application. Thus, other third-party applications are able to consume APIs provided by your application.

In the first half of the article, we’ll install and configure the necessary libraries, and the second half goes through how to set up demo resources in your application and consume them from third-party applications.

Server Configurations

In this section, we're going to install the dependencies that are required in order to make the Passport library work with Laravel. After installation, there's quite a bit of configuration that we'll need to go through so that Laravel can detect the Passport library.

Let's go ahead and install the Passport library using composer.

$ composer require laravel/passport

That's pretty much it as far as the Passport library installation is concerned. Now let's make sure that Laravel knows about it.

Working with Laravel, you're probably aware of the concept of a service provider that allows you to configure services in your application. Thus, whenever you want to enable a new service in your Laravel application, you just need to add an associated service provider entry in the config/app.php.

If you're not aware of Laravel service providers yet, I would strongly recommend that you do yourself a favor and go through this introductory article that explains the basics of service providers in Laravel.

In our case, we just need to add the PassportServiceProvider provider to the list of service providers in config/app.php as shown in the following snippet.
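The original snippet isn't included in this excerpt; the added entry would look like this (config fragment, sketch):

```php
// config/app.php (fragment) — register the Passport service provider.
'providers' => [
    // ... existing framework and package service providers ...
    Laravel\Passport\PassportServiceProvider::class,
],
```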

Next, we need to generate a pair of public and private keys that will be used by the Passport library for encryption. As expected, the Passport library provides an artisan command to create it easily.

$ php artisan passport:install

That should have created keys at storage/oauth-public.key and storage/oauth-private.key. It also creates some demo client credentials that we'll get back to later.

Moving ahead, let's oauthify the existing User model class that Laravel uses for authentication. To do that, we need to add the HasApiTokens trait to the User model class. Let's do that as shown in the following snippet.
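The original snippet isn't included here; a sketch of the updated model (the App namespace matches Laravel versions of the article's era):

```php
<?php
// app/User.php (sketch) — add the HasApiTokens trait to the User model.
namespace App;

use Illuminate\Foundation\Auth\User as Authenticatable;
use Illuminate\Notifications\Notifiable;
use Laravel\Passport\HasApiTokens;

class User extends Authenticatable
{
    use HasApiTokens, Notifiable;

    // ...the rest of the default User model stays unchanged...
}
```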

The HasApiTokens trait contains helper methods that are used to validate tokens in the request and check the scope of resources being requested in the context of the currently authenticated user.

Further, we need to register the routes provided by the Passport library with our Laravel application. These routes will be used for standard OAuth2 operations like authorization, requesting access tokens, and the like.

In the boot method of the app/Providers/AuthServiceProvider.php file, let's register the routes of the Passport library.
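The snippet isn't reproduced in this excerpt; the change is essentially a one-line addition to the boot method (sketch):

```php
<?php
// app/Providers/AuthServiceProvider.php (fragment)
use Laravel\Passport\Passport;

public function boot()
{
    $this->registerPolicies();

    // Registers /oauth/authorize, /oauth/token and the other Passport routes.
    Passport::routes();
}
```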

So far, we've done everything that's required as far as the OAuth2 server configuration is concerned.

Set Up the Demo Resources

In the previous section, we did all the hard work to set up the OAuth2 authentication server in our application. In this section, we'll set up a demo resource that could be requested over the API call.

We will try to keep things simple. Our demo resource returns the user information provided that there's a valid uid parameter present in the GET request.

Let's create a controller file app/Http/Controllers/UserController.php with the following contents.

Although we've defined it as /user/get, the effective API route is /api/user/get, and that's what you should use when you request a resource over that route. The api prefix is automatically handled by Laravel, and you don't need to worry about that!

In the next and last section, we'll discuss how you could create client credentials and consume the OAuth2 API.

How to Consume OAuth2 APIs

Now that we've set up the OAuth2 server in our application, any third party can connect to our server with OAuth and consume the APIs available in our application.

First of all, third-party applications must register with our application in order to be able to consume APIs. In other words, they are considered as client applications, and they will receive a client id and client secret upon registration.

The Passport library provides an artisan command to create client accounts without much hassle. Let's go ahead and create a demo client account.

$ php artisan passport:client
Which user ID should the client be assigned to?:
> 1
What should we name the client?:
> Demo OAuth2 Client Account
Where should we redirect the request after authorization? [http://localhost/auth/callback]:
> http://localhost/oauth2_client/callback.php
New client created successfully.
Client ID: 1
Client secret: zMm0tQ9Cp7LbjK3QTgPy1pssoT1X0u7sg0YWUW01

When you run the artisan passport:client command, it asks you a few questions before creating the client account. Out of those, there's an important one that asks you the callback URL.

The callback URL is the one where users will be redirected back to the third-party end after authorization. And that's where the authorization code that is supposed to be used in exchange for the access token will be sent. We are about to create that file in a moment.

Now, we're ready to test OAuth2 APIs in the Laravel application.

For demonstration purposes, I'll create the oauth2_client directory under the document root in the first place. Ideally, these files will be located at the third-party end that wants to consume APIs in our Laravel application.

Let's create the oauth2_client/auth_redirection.php file with the following contents.
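The file's contents aren't reproduced in this excerpt; a minimal sketch, with placeholder client id and URLs, could be:

```php
<?php
// oauth2_client/auth_redirection.php (sketch) — redirect the user to the
// Laravel app's authorization endpoint. All values are placeholders;
// use the credentials generated by passport:client.
$query = http_build_query([
    'client_id'     => 1,
    'redirect_uri'  => 'http://localhost/oauth2_client/callback.php',
    'response_type' => 'code',
    'scope'         => '',
]);

header('Location: http://your-laravel-site-url/oauth/authorize?' . $query);
```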

Again, make sure to adjust the URLs and client credentials according to your setup in the above file.

How It Works Altogether

In this section, we'll test it altogether from the perspective of an end user. As an end user, there are two applications in front of you:

The first one is the Laravel application that you already have an account with. It holds your information that you could share with other third-party applications.

The second one is the demo third-party client application, auth_redirection.php and callback.php, that wants to fetch your information from the Laravel application using the OAuth API.

The flow starts from the third-party client application. Go ahead and open the http://localhost/oauth2_client/auth_redirection.php URL in your browser, and that should redirect you to the Laravel application. If you're not already logged into the Laravel application, the application will ask you to do so in the first place.

Once the user is logged in, the application displays the authorization page.

If the user authorizes that request, the user will be redirected back to the third-party client application at http://localhost/oauth2_client/callback.php along with the code as the GET parameter that contains the authorization code.

Once the third-party application receives the authorization code, it could exchange that code with the Laravel application to get the access token. And that's exactly what it has done in the following snippet of the oauth2_client/callback.php file.
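That snippet isn't included in this excerpt; the exchange could be sketched as follows (credentials and URLs are placeholders; /oauth/token is Passport's standard token endpoint):

```php
<?php
// oauth2_client/callback.php (sketch) — exchange the authorization code
// for an access token.
$ch = curl_init('http://your-laravel-site-url/oauth/token');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query([
    'grant_type'    => 'authorization_code',
    'client_id'     => 1,
    'client_secret' => 'your-client-secret',
    'redirect_uri'  => 'http://localhost/oauth2_client/callback.php',
    'code'          => $_GET['code'],
]));

$response = json_decode(curl_exec($ch), true);
curl_close($ch);

// On success, $response['access_token'] holds the Bearer token for API calls.
```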

Next, the third-party application checks the response of the cURL request to see whether it contains a valid access token.

As soon as the third-party application gets the access token, it could use that token to make further API calls to request resources as needed from the Laravel application. Of course, the access token needs to be passed in every request that's requesting resources from the Laravel application.

We've tried to mimic the use-case in that the third-party application wants to access the user information from the Laravel application. And we've already built an API endpoint, http://your-laravel-site-url/api/user/get, in the Laravel application that facilitates it.

So that's the complete flow of how you're supposed to consume the OAuth2 APIs in Laravel.

And with that, we’ve reached the end of this article.

Conclusion

Today, we explored the Passport library in Laravel, which allows us to set up an OAuth2 server in an application very easily.

For those of you who are either just getting started with Laravel or looking to expand your knowledge, site, or application with extensions, we have a variety of things you can study in Envato Market.

Don't hesitate to share your thoughts and queries in the comments below!

Getting Started With Redux: Connecting Redux With React (published 2018-05-31)

This is the third part of the series on Getting Started With Redux, and in this tutorial, we're going to learn how to connect a Redux store with React. Redux is an independent library that works with all the popular front-end libraries and frameworks, and it works flawlessly with React because of React's functional approach.

You don't need to have followed the previous parts of this series for this tutorial to make sense. If you're here to learn about using React with Redux, you can take the Quick Recap below and then check out the code from the previous part and start from there.

Quick Recap

In the first post, we learned about the Redux workflow and answered the question, Why Redux? We created a very basic demo application and showed you how the various components of Redux—actions, reducers, and the store—are connected.

In the previous post, we started building a contact list application that lets you add contacts and then displays them as a list. We created a Redux store for our contact list, and we added a few reducers and actions. We attempted to dispatch actions and retrieve the new state using store methods like store.dispatch() and store.getState().

By the end of this article, you'll have learned:

the difference between container components and presentational components

about the react-redux library

how to bind React and Redux using connect()

how to dispatch actions using mapDispatchToProps

how to retrieve state using mapStateToProps

The code for the tutorial is available on GitHub in the react-redux-demo repo. Grab the code from the v2 branch and use that as a starting point for this tutorial. If you're curious to know how the application looks by the end of this tutorial, try the v3 branch. Let's get started.

Designing a Component Hierarchy: Smart vs. Dumb Components

This is a concept that you've probably heard of before, but let's have a quick look at the difference between smart and dumb components. Recall that we created two separate directories for components, one named containers/ and the other components/. The benefit of this approach is that the behavior logic is separated from the view.

The presentational components are said to be dumb because they are concerned only with how things look. They are decoupled from the business logic of the application and receive data and callbacks from a parent component exclusively via props. They don't care whether your application is connected to a Redux store or the data is coming from the local state of the parent component.

The container components, on the other hand, deal with the behavioral part and should contain very limited DOM markup and style. They pass the data that needs to be rendered to the dumb components as props.

This is an HTML form for adding a new contact. The component receives onInputChange and onFormSubmit callbacks as props. The onInputChange event is triggered when the input value changes and onFormSubmit when the form is being submitted.

This component receives an array of contact objects as props, hence the name ContactList. We use the Array.map() method to extract individual contact details and then pass on that data to <ContactCard />.

The returnContactList() function retrieves the array of contact objects and passes it to the ContactList component. Since returnContactList() retrieves the data from the store, we'll leave that logic blank for the moment.

We've created three bare-bones handler methods that correspond to the three actions. They all dispatch actions to update the state. In the render method, we've left out the logic for showing/hiding the form because we need to fetch the state.

Now let's see how to bind React and Redux together.

The react-redux Library

React bindings are not available in Redux by default. You will need to install an extra library called react-redux first.

npm install --save react-redux

The library exports just two APIs that you need to remember, a <Provider /> component and a higher-order function known as connect().

The Provider Component

Libraries like Redux need to make the store data accessible to the whole React component tree, starting from the root component. The Provider pattern allows the library to pass the data from top to bottom. The code below demonstrates how Provider magically adds the state to all the components in the component tree.

Demo Code

The entire app needs to have access to the store. So we wrap the provider around the app component and then add the data that we need to the tree's context. The descendants of the component then have access to the data.

The connect() Method

Now that we've provided the store to our application, we need to connect React to the store. The only way you can communicate with the store is by dispatching actions and by retrieving the state. We've previously used store.dispatch() to dispatch actions and store.getState() to retrieve the latest snapshot of the state. The connect() function lets you do exactly this, with the help of two functions known as mapStateToProps and mapDispatchToProps. I have demonstrated this concept in the example below:

mapStateToProps and mapDispatchToProps both return an object, and the key of this object becomes a prop of the connected component. For instance, state.contacts.newContact is mapped to props.newContact. The action creator addContact() is mapped to props.addContact.

But for this to work, you need the last line in the code snippet above.
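Since both mapping functions are plain functions, the idea can be sketched and exercised without React at all (the addContact action creator body here is illustrative, not from the original repo):

```javascript
// The action creator and mapping functions are plain functions.
const addContact = () => ({ type: 'ADD_NEW_CONTACT' });

const mapStateToProps = (state) => ({
  newContact: state.contacts.newContact, // exposed as props.newContact
});

const mapDispatchToProps = (dispatch) => ({
  addContact: () => dispatch(addContact()), // exposed as props.addContact
});

// The line that wires everything up (requires react-redux):
//   export default connect(mapStateToProps, mapDispatchToProps)(AddContact);

// Because they are plain functions, both mappings are easy to exercise:
const props = mapStateToProps({ contacts: { newContact: { name: 'Jane' } } });
console.log(props.newContact.name); // Jane

const dispatched = [];
mapDispatchToProps((action) => dispatched.push(action)).addContact();
console.log(dispatched[0].type); // ADD_NEW_CONTACT
```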

Connect React Containers to Redux

The connect function is used to bind React containers to Redux. What that means is that you can use the connect feature to:

subscribe to the store and map its state to your props

dispatch actions and map the dispatch callbacks into your props

Once you've connected your application to Redux, you can use this.props to access the current state and also to dispatch actions. I am going to demonstrate the process on the AddContact component. AddContact needs to dispatch three actions and get the state of two properties from the store. Let's have a look at the code.

mapStateToProps receives the state of the store as an argument. It returns an object that describes how the state of the store is mapped into your props. mapDispatchToProps returns a similar object that describes how the dispatch actions are mapped to your props.

Finally, we use connect to bind the AddContact component to the two functions as follows:

We've gone through the same procedure that we followed above to connect the Contacts component with the Redux store. The mapStateToProps function maps the store object to the contactList props. We then use connect to bind the props value to the Contacts component. The second argument to connect is null because we don't have any actions to dispatch. That completes the integration of our app with the state of the Redux store.

What Next?

In the next post, we'll take a deeper look at middleware and start dispatching actions that involve fetching data from the server. Share your thoughts in the comments!

Getting Started With Redux: Learn by Example

Redux helps you manage state by setting it up at a global level. In the previous tutorial, we had a good look at the Redux architecture and the integral components of Redux: actions, action creators, the store, and reducers.

In this second post of the series, we are going to bolster our understanding of Redux and build on top of what we already know. We will start by creating a realistic Redux application—a contact list—that's more complex than a basic counter. This will help you strengthen your understanding of the single store and multiple reducers concept which I introduced in the previous tutorial. Then later we'll talk about binding your Redux state with a React application and the best practices that you should consider while creating a project from scratch.

However, it's okay if you haven't read the first post—you should still be able to follow along as long as you know the Redux basics. The code for the tutorial is available in the repo, and you can use that as a starting point.

Creating a Contact List Using Redux

We're going to build a basic contact list with the following features:

display all contacts

search for contacts

fetch all contacts from the server

add a new contact

push the new contact data into the server

Here's what our application is going to look like:

Final product — Contact list View

Final Product — Add contact view

Covering everything in one stretch is hard. So in this post we're going to focus on just the Redux part of adding a new contact and displaying the newly added contact. From a Redux perspective, we'll be initializing the state, creating the store, adding reducers and actions, etc.

In the next tutorial, we'll learn how to connect React and Redux and dispatch Redux actions from a React front-end. In the final part, we'll shift our focus towards making API calls using Redux. This includes fetching the contacts from the server and making a server request while adding new contacts. Apart from that, we'll also create a search bar feature that lets you search all the existing contacts.

Create a Sketch of the State Tree

You can download the react-redux demo application from my GitHub repository. Clone the repo and use the v1 branch as a starting point. The v1 branch is very similar to the create-react-app template. The only difference is that I've added a few empty directories to organize Redux. Here's the directory structure.

Our store needs to have two properties—contacts and ui. The contacts property takes care of all contacts-related state, whereas the ui handles UI-specific state. There is no hard rule in Redux that prevents you from placing the ui object as a sub-state of contacts. Feel free to organize your state in a way that feels meaningful to your application.

The contacts property has two properties nested inside it—contactlist and newContact. The contactlist is an array of contacts, whereas newContact temporarily stores contact details while the contact form is being filled. I am going to use this as a starting point for building our awesome contact list app.

How to Organize Redux

Redux doesn't have an opinion about how you structure your application. There are a few popular patterns out there, and in this tutorial, I will briefly talk about some of them. But you should pick one pattern and stick with it until you fully understand how all the pieces are connected together.

The most common pattern that you'll find is the Rails-style file and folder structure. You'll have several top-level directories like the ones below:

components: A place to store the dumb React components. These components do not care whether you're using Redux or not.

containers: A directory for the smart React components that dispatch actions to the Redux store. The binding between redux and react will be taking place here.

actions: The action creators will go inside this directory.

reducers: Each reducer gets an individual file, and you'll be placing all the reducer logic in this directory.

store: The logic for initializing the state and configuring the store will go here.

The image below demonstrates how our application might look if we follow this pattern:

The Rails style should work for small and mid-sized applications. However, when your app grows, you can consider moving towards the domain-style approach or other popular alternatives that are closely related to domain-style. Here, each feature will have a directory of its own, and everything related to that feature (domain) will be inside it. The image below compares the two approaches, Rails-style on the left and domain-style on the right.

For now, go ahead and create directories for components, containers, store, reducers, and action. Let's start with the store.

Single Store, Multiple Reducers

Let's create a prototype for the store and the reducer first. From our previous example, this is how our store would look:

The switch statement has three cases that correspond to three actions that we will be creating. Here is a brief explanation of what the actions are meant for.

HANDLE_INPUT_CHANGE: This action gets triggered when the user inputs new values into the contact form.

ADD_NEW_CONTACT: This action gets dispatched when the user submits the form.

TOGGLE_CONTACT_FORM: This is a UI action that takes care of showing/hiding the contact form.

Although this naive approach works, as the application grows, using this technique will have a few shortcomings.

We're using a single reducer. Although a single reducer sounds okay for now, imagine having all your business logic under one very large reducer.

The code above doesn't follow the Redux structure that we've discussed in the previous section.

To fix the single reducer issue, Redux has a method called combineReducers that lets you create multiple reducers and then combine them into a single reducing function. The combineReducers function enhances readability. So I am going to split the reducer into two—a contactsReducer and a uiReducer.

In the example above, createStore accepts an optional second argument which is the initial state. However, if we are going to split the reducers, we can move the whole initialState to a new file location, say reducers/initialState.js. We will then import a subset of initialState into each reducer file.

Splitting the Reducer

Let's restructure our code to fix both the issues. First, create a new file called store/createStore.js and add the following code:
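A sketch of what store/createStore.js could contain. In the real file, combineReducers and createStore are imported from the redux package; a minimal stand-in for combineReducers is inlined here so the sketch runs on its own:

```javascript
// Minimal stand-in for redux's combineReducers, for illustration only.
const combineReducers = reducers => (state = {}, action) => {
  const next = {};
  Object.keys(reducers).forEach(key => {
    // each reducer manages only its own slice of the state tree
    next[key] = reducers[key](state[key], action);
  });
  return next;
};

// Two small reducers, each owning one slice of state.
const uiReducer = (state = { isContactFormHidden: true }, action) =>
  action.type === 'TOGGLE_CONTACT_FORM'
    ? { ...state, isContactFormHidden: !state.isContactFormHidden }
    : state;

const contactsReducer = (state = { contactList: [] }, action) =>
  action.type === 'ADD_NEW_CONTACT'
    ? { ...state, contactList: [...state.contactList, action.payload] }
    : state;

const rootReducer = combineReducers({ ui: uiReducer, contacts: contactsReducer });
// const configureStore = () => createStore(rootReducer); // createStore from redux
```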

When you're creating reducers, always keep the following in mind: a reducer needs to have a default value for its state, and it always needs to return something. If the reducer fails to follow this specification, you will get errors.

Since we've covered a lot of code, let's have a look at the changes that we've made with our approach:

The combineReducers call has been introduced to tie together the split reducers.

The state of the ui object will be handled by uiReducer and the state of the contacts by the contactsReducer.

To keep the reducers pure, spread operators have been used. The three-dot syntax is the spread operator. If you're not comfortable with the spread syntax, you could consider using a library like Immutable.js instead.

The initial value is no longer specified as an optional argument to createStore. Instead, we've created a separate file for it called initialState.js. We're importing initialState and then setting the default state by doing state = initialState.ui.
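For example (the state shape is an assumption), the split might look like this:

```javascript
// reducers/initialState.js might export something like this:
const initialState = {
  ui: { isContactFormHidden: true },
  contacts: { contactList: [], newContact: { name: '', email: '' } }
};

// reducers/uiReducer.js then defaults its state to the ui slice:
const uiReducer = (state = initialState.ui, action) => {
  switch (action.type) {
    case 'TOGGLE_CONTACT_FORM':
      return { ...state, isContactFormHidden: !state.isContactFormHidden };
    default:
      return state;
  }
};
```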

Actions and Action Creators

Let's add a couple of actions and action creators for handling form changes, adding a new contact, and toggling the UI state. If you recall, action creators are just functions that return an action. Add the following code in actions/index.js.
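A sketch of what actions/index.js might contain (the payload shapes are assumptions):

```javascript
// actions/index.js (sketch). The action names come from the text.
const toggleContactForm = () => ({
  type: 'TOGGLE_CONTACT_FORM' // no payload: the reducer just flips a flag
});

const handleInputChange = (field, value) => ({
  type: 'HANDLE_INPUT_CHANGE',
  payload: { [field]: value } // e.g. { email: 'bob@example.com' }
});

const addNewContact = contact => ({
  type: 'ADD_NEW_CONTACT',
  payload: contact
});
```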

Each action needs a type property. The type is like a key that determines which reducer gets invoked and how the state gets updated in response to that action. The payload is optional, and you can actually call it anything you want.

In our case, we've created three actions.

The TOGGLE_CONTACT_FORM action doesn't need a payload because every time it is triggered, the value of ui.isContactFormHidden simply gets toggled. Actions that just flip a boolean like this don't require a payload.

The HANDLE_INPUT_CHANGE action is triggered when the form value changes. So, for instance, imagine that the user is filling the email field. The action then receives "email" and "bob@example.com" as inputs, and the payload handed over to the reducer is an object that looks like this:

{
email: "bob@example.com"
}

The reducer uses this information to update the relevant properties of the newContact state.

Dispatching Actions and Subscribing to the Store

The next logical step is to dispatch the actions. Once an action is dispatched, the state changes in response to it. To dispatch actions and to get the updated state tree, the Redux store offers a few methods. They are:

dispatch(action): Dispatches an action that could potentially trigger a state change.

getState(): Returns the current state tree of your application.

subscribe(listener): Adds a change listener that gets called every time an action is dispatched and some part of the state tree is changed.
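These three methods can be illustrated with a toy store, a simplified stand-in for what Redux's createStore returns:

```javascript
// A toy store, for illustration only (redux's real store does more).
const createTinyStore = (reducer, initialState) => {
  let state = initialState;
  const listeners = [];
  return {
    dispatch: action => {                 // may trigger a state change
      state = reducer(state, action);
      listeners.forEach(fn => fn(state)); // notify every subscriber
    },
    getState: () => state,                // current state tree
    subscribe: fn => listeners.push(fn)   // called after every dispatch
  };
};

const counter = (state, action) =>
  action.type === 'INCREMENT' ? state + 1 : state;

const store = createTinyStore(counter, 0);
store.subscribe(s => console.log('state is now', s));
store.dispatch({ type: 'INCREMENT' });
store.getState(); // 1
```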

Head to the index.js file and import the configureStore function and the three actions that we created earlier:

If everything is working right, you should see this in the developer console.

That's it! In the developer console, you can see the Redux store being logged, so you can see how it changes after each action.

Summary

We've created a bare-bones Redux application for our awesome contact list application. We learned about reducers, splitting reducers to make our app structure cleaner, and writing actions for mutating the store.

Towards the end of the post, we subscribed to the store using the store.subscribe() method. Technically, this isn't the best way to get things done if you're going to use React with Redux. There are more optimized ways to connect the React front-end with Redux. We'll cover those in the next tutorial.

This tutorial will teach you how to use Axios to fetch data and then how to manipulate it and eventually display it on your page with filtering functionality. You will learn how to use the map, filter and includes methods along the way. On top of that, you will be creating a Higher-Order Component (HOC) to handle the loading state of the fetched data from the API endpoint.

Let's start with a clean React app. I assume you use create-react-app, and the filenames will be in accordance with its outputs.

We only need to install the Axios module for this tutorial.

Go to your project directory through the terminal window and then type in npm install axios --save in order to install Axios for your project locally.

Fetching the Data

Let's add the Axios module to our application by importing it into our App.js file.

import axios from 'axios'

The Random User Generator API offers a bunch of options for creating various types of data. You can check the documentation for further information, but for this tutorial, we will keep it simple.

We want to fetch ten different users, and we only need the name, surname, and a unique ID, which is required for React when creating lists of elements. Also, to make the call a bit more specific, let's include the nationality option as an example.

Below is the API endpoint that we will call.

Note that I didn't use the id option provided in the API due to the fact that it sometimes returns null for some users. So, just to make sure that there will be a unique value for each user, I included the registered option in the API.

https://randomuser.me/api/?results=10&inc=name,registered&nat=fr

You can copy and paste it into your browser and you will see the returned data in JSON format.

Now, the next thing is to make an API call through Axios.

First of all, let's create a state so that we can store the fetched data.

Inside our App component, add a class constructor and then create the state.
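A sketch of the constructor (in the real app, the class extends React.Component; the base class is omitted here so the snippet stands alone):

```javascript
// App.js (sketch) — in the real app: class App extends Component
class App {
  constructor(props) {
    // `store` keeps the original fetched data; `users` holds what's displayed
    this.state = { users: [], store: [] };
  }
}
```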

Here you see the users and store states. The store state will keep the original data for filtering purposes and will never be edited, while users will hold the filter results that are shown in the DOM.

Now go ahead and create the componentDidMount() lifecycle hook.

Inside this lifecycle hook, we will fetch the data, and then by using the map method, we will create new intermediate data that we will use inside the setState method.

If you check the result of the API call in your browser, you will see that there are first and last key-value pairs inside the name object, but no key-value pair for a full name. So we will combine first and last to create a full name inside a new JavaScript object. Note that JSON and JavaScript objects are different things, although their syntax looks much the same.
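The restructuring step can be sketched as a pure helper, leaving out the axios call itself (the sample input mirrors the API shape described above):

```javascript
// Turn the API's results array into [{ name, id }] objects.
// Each result has name.first / name.last, plus registered (used as a unique id).
const restructure = results =>
  results.map(result => ({
    // combine first and last into a full name
    name: `${result.name.first} ${result.name.last}`,
    // the registered value stands in as a unique id
    id: result.registered
  }));

restructure([{ name: { first: 'Anna', last: 'Durand' }, registered: '2016-07-01' }]);
// [{ name: 'Anna Durand', id: '2016-07-01' }]
```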

Here, we called the map method on json.data.results, which is an array, and then referred to each element of the array as result (notice the singular/plural difference). Then, by using the key-value pairs of each object inside the array, we created another object with name and id key-value pairs.

At the end, we used another then method in order to be able to refer to our new data. We referred to it as newData and simply logged it to the console to see if everything went as planned.

You should see a new array with objects having name and id pairs.

Storing the Data

Instead of logging the result to the console, we have to store it. In order to do that, we will use setState.

Here, we initially set both users and store data with our new newData array.

We need two variables due to the fact that we need to store the original data and should never lose it. By using the information inside the store state, we can filter the data and then populate the users state and show it on the page. This will be clearer when we implement the filtering functionality.

Last but not least, we added catch to actually catch any possible errors during fetching and display the error as an alert message.

Filtering Functionality

The idea of filtering is quite simple. We have our store state, and it always keeps the original data without changing. Then, by using the filter function on this state, we only get the matching elements and then assign them to the users state.
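The filtering step itself can be sketched as a pure helper (filterUsers is a hypothetical name; in the app, this logic runs inside the input's change handler):

```javascript
// Keep only the items whose name contains the search term, case-insensitively.
const filterUsers = (store, searchTerm) =>
  store.filter(item =>
    item.name.toLowerCase().includes(searchTerm.toLowerCase())
  );

filterUsers(
  [{ name: 'Anna Durand' }, { name: 'Paul Martin' }],
  'anna'
);
// [{ name: 'Anna Durand' }]
```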

The filter method requires a function as an argument, a function to be run for each element in the array. Here we refer to each element inside the array as item. Then we take the name key of each item and convert it to lower case in order to make our filtering functionality case insensitive.

After we have the name key for the item, we check if it includes the search string we typed in. includes is another built-in JavaScript method. We pass the search string typed in the input field as an argument to includes, and it returns whether that string is included in the string it was called on. Again, we convert the input string to lower case so that it doesn't matter whether you type upper-case or lower-case input.

In the end, the filter method returns the matching elements. So we simply take these elements and store them inside the users state through setState.

Here, we again used the map method to get each item in the array and create a <li> item out of it. Note that when you use map to create a list of items, you need to use a key in order for React to keep track of each list item.

Notice that we wrapped List with another component named LoadingHOC before exporting it. This is how Higher-Order Components (HOCs) work.

What we did here is to pass our component as an argument to another component before exporting it. So this LoadingHOC component will be enhancing our component with new features.

The LoadingHOC Component

As I briefly explained before, a HOC takes a component as an input and then exports an enhanced version of the input component.

Inside the HOC, we can directly access the props of the input component. So we just check whether the length of the usernames prop is 0 or not. If it is 0, this means that the data has yet to be fetched because it is an empty array by default. So we just show a spinner GIF that we imported. Otherwise, we just show the input component itself.
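The pattern can be illustrated with plain functions standing in for React components (a simplified sketch, not the actual component code):

```javascript
// A HOC takes a component and returns an enhanced version of it.
const LoadingHOC = Component => props =>
  props.usernames.length === 0
    ? 'spinner'         // in the real app: render the spinner GIF
    : Component(props); // in the real app: <Component {...props} />

// A stand-in "component" that renders the list of names as a string.
const List = props => props.usernames.map(u => u.name).join(', ');
const EnhancedList = LoadingHOC(List);

EnhancedList({ usernames: [] });                        // 'spinner'
EnhancedList({ usernames: [{ name: 'Anna Durand' }] }); // 'Anna Durand'
```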

It's important not to forget to pass the props and state back to the input component with a spread operator. Otherwise, your component would be deprived of them.

Conclusion

Throughout this tutorial, we took a quick look at the Random User Generator API as a source of random data. Then we fetched the data from an API endpoint and restructured the results inside a new JavaScript Object with the map method.

The next thing was to create a filtering function with the filter and includes methods. Finally, we created two different components and enhanced one of them with a Higher-Order Component (HOC) by introducing a loading indicator when the data is not there yet.

Over the last couple of years, React has grown in popularity. In fact, we have a number of items in Envato Market that are available for purchase, review, implementation, and so on. If you’re looking for additional resources around React, don’t hesitate to check them out.

]]>2018-05-28T12:37:45+00:00//www.4elements.com/blog/read/a-beginners-guide-to-regular-expressions-in-javascript
https://www.4elements.com/blog/read/a-beginners-guide-to-regular-expressions-in-javascript#When:13:00:00ZEveryone working with JavaScript will have to deal with strings at one point or other. Sometimes, you will just have to store a string inside another variable and then pass it over. Other times, you will have to inspect it and see if it contains a particular substring.

However, things are not always this easy. There will be times when you will not be looking for a particular substring but a set of substrings which follow a certain pattern.

Let's say you have to replace all occurrences of "Apples" in a string with "apples". You could simply use theMainString.replace("Apples", "apples"). Nice and easy.

Now let's say you have to replace "appLes" with "apples" as well. Similarly, "appLES" should become "apples" too. Basically, all case variations of "Apple" need to be changed to "apple". Passing simple strings as an argument will no longer be practical or efficient in such cases.

This is where regular expressions come in—you could simply use the case-insensitive flag i and be done with it. With the flag in place, it doesn't matter if the original string contained "Apples", "APPles", "ApPlEs", or "Apples". Every instance of the word will be replaced with "apples".

Just like the case-insensitive flag, regular expressions offer a lot of other features which will be covered in this tutorial.

Using Regular Expressions in JavaScript

You have to use a slightly different syntax to indicate a regular expression inside different String methods. Unlike a simple string, which is enclosed in quotes, a regular expression consists of a pattern enclosed between slashes. Any flags that you use in a regular expression will be appended after the second slash.

Going back to the previous example, here is what the replace() method would look like with a regular expression and a simple string.
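For example (the sample strings themselves are assumptions):

```javascript
// With the case-insensitive flag, the pattern matches regardless of case:
"I love Apples".replace(/apples/i, "apples");  // "I love apples"
"I love APPles".replace(/apples/i, "apples");  // "I love apples"

// A plain string argument only matches exactly:
"I love APPles".replace("Apples", "apples");   // "I love APPles" — no match
```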

As you can see, the regular expression worked in both cases. We will now learn more about flags and special characters that make up the pattern inside a regular expression.

Backslash in Regular Expressions

You can turn normal characters into special characters by adding a backslash before them. Similarly, you can turn special characters into normal characters by adding a backslash before them.

For example, d is not a special character. However, \d is used to match a digit character in a string. Similarly, D is not a special character either, but \D is used to match non-digit characters in a string.

Digit characters include 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. When you use \d inside a regular expression, it will match any of these ten characters. When you use \D inside a regular expression, it will match all the non-digit characters.
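For example (sample strings are assumptions):

```javascript
"Apple".replace(/\d/, "-");   // "Apple" — no digit to match
"Apple1".replace(/\d/, "-");  // "Apple-"
"2018".replace(/\d/, "-");    // "-018" — only the first digit is replaced
```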

You should note that only the first matched character is replaced in the third case. You can also use flags to replace all the matches. We will learn about such flags later.

Just like \d and \D, there are other special character sequences as well.

You can use \w to match any "word" character in a string. Here, word character refers to A-Z, a-z, 0-9, and _. So, basically, it will match all digits, all lowercase and uppercase letters, and the underscore.

You can use \W to match any non-word character in a string. It will match characters like %, $, #, ₹, etc.

You can use \s to match a single white space character, which includes space, tab, form feed, and line feed. Similarly, you can use \S to match all other characters besides white space.

You can also look for a specific white space character using \f, \n, \r, \t, and \v, which stand for form feed, line feed, carriage return, horizontal tab, and vertical tab.

Sometimes, you will face situations where you need to replace a word with its substitute, but only if it is not part of a larger word. For example, consider the following sentence:

"A lot of pineapple images were posted on the app".

In this case, we want to replace the word "app" with "board". However, using a simple regular expression pattern will turn "apple" into "boardle", and the final sentence would become:

"A lot of pineboardle images were posted on the app".

In such cases, you can use another special character sequence: \b. This checks for word boundaries. A word boundary is formed by any non-word character, like a space, "$", "%", or "#". Watch out, though: accented characters like "ü" are also non-word characters, so they form word boundaries too.

"A lot of pineapple images were posted on the app".replace(/app/, "board");
// A lot of pineboardle images were posted on the app
"A lot of pineapple images were posted on the app".replace(/\bapp/, "board");
// A lot of pineapple images were posted on the board

Similarly, you can use \B to match a non-word boundary. For example, you could use \B to only match "app" when it is within another word, like "pineapple".
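For example:

```javascript
"A lot of pineapple images were posted on the app".replace(/\Bapp/, "board");
// "A lot of pineboardle images were posted on the app"
// — only the "app" inside "pineapple" matches, because it is
//   preceded by a word character rather than a word boundary
```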

Matching a Pattern "n" Number of Times

You can use ^ to tell JavaScript to only look at the beginning of the string for a match. Similarly, you can use $ to only look at the end of the string for a match.

You can use * to match the preceding expression 0 or more times. For example, /Ap*/ will match A, Ap, App, Appp, and so on.

In a similar manner, you can use + to match the preceding expression 1 or more times. For example, /Ap+/ will match Ap, App, Appp, and so on. The expression will not match the single A this time.

Sometimes, you only want to match a specific number of occurrences of a given pattern. In such cases, you should use the {n} character sequence, where n is a number. For instance, /Ap{2}/ will match App but not Ap. It will also match the first two 'p's in Appp and leave the third one untouched.

You can use {n,} to match at least 'n' occurrences of a given expression. This means that /Ap{2,}/ will match App but not Ap. It will also match all the 'p's in Apppp and replace them with your replacement string.

You can also use {n,m} to specify a minimum and maximum number and limit the number of times the given expression should be matched. For example, /Ap{2,4}/ will match App, Appp, and Apppp. It will also match the first four 'p's in Apppppp and leave the rest of them untouched.
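A few of these quantifiers in action, using "Add" as the replacement string (the example strings are assumptions):

```javascript
"Ap".replace(/Ap{2}/, "Add");        // "Ap" — no match, it needs two p's
"Appp".replace(/Ap{2}/, "Add");      // "Addp" — matches the first two p's only
"Apppppp".replace(/Ap{2,4}/, "Add"); // "Addpp" — matches at most four p's
```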

Using Parentheses to Remember Matches

So far, we have only replaced patterns with a constant string. For example, in the previous section, the replacement we used was always "Add". Sometimes, you will have to look for a pattern match inside the given string and then replace it with a part of the pattern.

Let's say you have to find a word with five or more letters in a string and then add an "s" at the end of the word. In such cases, you will not be able to use a constant string value as a replacement as the final value depends on the matching pattern itself.

"I like Apple".replace(/(\w{5,})/, '$1s');
// I like Apples
"I like Banana".replace(/(\w{5,})/, '$1s');
// I like Bananas

This was a simple example, but you can use the same technique to keep more than one matching pattern in memory. The number of sub-patterns in the full match will be determined by the number of parentheses used.

Inside the replacement string, the first sub-match will be identified using $1, the second sub-match will be identified using $2, and so on. Here is another example to further clarify the usage of parentheses.

"I am looking for John and Jason".replace(/(\w+)\sand\s(\w+)/, '$2 and $1');
// I am looking for Jason and John

Using Flags With Regular Expressions

As I mentioned in the introduction, one more important feature of regular expressions is the use of special flags to modify how a search is performed. The flags are optional, but you can use them to do things like making a search global or case-insensitive.

These are the four commonly used flags to change how JavaScript searches or replaces a string.

g: This flag will perform a global search instead of stopping after the first match.

i: This flag will perform a search without checking for an exact case match. For instance, Apple, aPPLe, and apPLE are all treated the same during case-insensitive searches.

m: This flag will perform a multi-line search.

y: This flag will look for a match at the index indicated by the lastIndex property.
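Here is how the g and i flags change the result of a replace call (the example strings are assumptions):

```javascript
"Apple and APPLE".replace(/apple/, "mango");   // "Apple and APPLE" — no match without i
"Apple and APPLE".replace(/apple/i, "mango");  // "mango and APPLE" — first match only
"Apple and APPLE".replace(/apple/gi, "mango"); // "mango and mango" — all matches
```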

Final Thoughts

The purpose of this tutorial was to introduce you to regular expressions in JavaScript and their importance. We began with the basics and then covered backslash and other special characters. We also learned how to check for a repeating pattern in a string and how to remember partial matches in a pattern in order to use them later.

Finally, we learned about commonly used flags which make regular expressions even more powerful. You can learn more about regular expressions in this article on MDN.

If there is anything that you would like me to clarify in this tutorial, feel free to let me know in the comments.

]]>2018-05-25T13:00:00+00:00//www.4elements.com/blog/read/introduction-to-popmotion-custom-animation-scrubber
https://www.4elements.com/blog/read/introduction-to-popmotion-custom-animation-scrubber#When:12:32:56ZIn the first part of the Popmotion introductory series, we learned how to use time-based animations like tween and keyframes. We also learned how to use those animations on the DOM, using the performant styler.

In part two, we learned how to use pointer tracking and record velocity. We then used that to power the velocity-based animations spring, decay, and physics.

In this final part, we're going to be creating a scrubber widget, and we're going to use it to scrub a keyframes animation. We'll make the widget itself from a combination of pointer tracking as well as spring and decay to give it a more visceral feel than run-of-the-mill scrubbers.

Try it for yourself:

Getting Started

Markup

First, fork this CodePen for the HTML template. As before, because this is an intermediate tutorial, I won't go through everything.

The main twist of note is that the handle on the scrubber is made up of two div elements: .handle and .handle-hit-area.

.handle is the round blue visual indicator of where the scrubber handle is. We've wrapped it in an invisible hit area element to make grabbing the element easier for touchscreen users.

Import Functions

At the top of your JS panel, import everything we're going to use in this tutorial:

Keyframes Animation

For our scrubbable animation, we're going to make the .box move from left to right with keyframes. However, we could just as easily scrub a tween or timeline animation using the same method outlined later in this tutorial.

Your animation will now be playing. But we don't want that! Let's pause it for now:

boxAnimation.pause();

Dragging the x-axis

It's time to use pointer to drag our scrubber handle. In the previous tutorial, we used both x and y properties, but with a scrubber we only need x.

We prefer to keep our code reusable, and tracking a single pointer axis is quite a common use case. So let's create a new function called, imaginatively, pointerX.

It will work exactly like pointer except it'll take just a single number as its argument and output just a single number (x):

const pointerX = (x) => pointer({ x }).pipe(xy => xy.x);

Here, you can see we're using a method of pointer called pipe. pipe is available on all the Popmotion actions we've seen so far, including keyframes.

pipe accepts multiple functions. When the action is started, all output will be passed through each of these functions in turn, before the update function provided to start fires.
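The idea behind pipe can be sketched in plain JavaScript (this is an illustration, not Popmotion's implementation):

```javascript
// Compose functions left-to-right: each function receives the previous output.
const pipe = (...fns) => v => fns.reduce((acc, fn) => fn(acc), v);

const toPercent = pipe(
  x => x / 200,            // normalize against a hypothetical range of 200px
  x => Math.round(x * 100) // express as a whole-number percentage
);

toPercent(50); // 25
```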

In this case, our function is simply:

xy => xy.x

All it is doing is taking the { x, y } object usually output by pointer and returning just the x axis.

Event Listeners

We need to know if the user has started pressing the handle before we start tracking with our new pointerX function.

In the last tutorial we used the traditional addEventListener function. This time, we're going to use another Popmotion function called listen. listen also provides a pipe method, as well as access to all action methods, but we're not going to use that here.

listen allows us to add event listeners to multiple events with a single function, similar to jQuery. So we can condense the previous four event listeners to two:

Right now, the handle can be scrubbed beyond the boundaries of the slider, but we'll come back to this later.

Scrubbing

Now we have a visually functional scrubber, but we're not scrubbing the actual animation.

Every value has a subscribe method. This allows us to attach multiple subscribers to fire when the value changes. We want to seek the keyframes animation whenever handleX updates.

First, measure the slider. On the line after we define range, add:

const rangeWidth = range.getBoundingClientRect().width;

keyframes.seek accepts a progress value expressed from 0 to 1, whereas our handleX is set in pixel values from 0 to rangeWidth.

We can convert from the pixel measurement to a 0 to 1 range by dividing the current pixel measurement by rangeWidth. On the line after boxAnimation.pause(), add this subscribe method:

handleX.subscribe(v => boxAnimation.seek(v / rangeWidth));

Now, if you play with the scrubber, the animation will scrub successfully!

The Extra Mile

Spring Boundaries

The scrubber can still be pulled outside the boundaries of the full range. To solve this, we could simply use a clamp function to ensure we don't output values outside of 0, rangeWidth.

Instead, we're going to go the extra step and attach springs to the end of our slider. When a user pulls the handle beyond the permitted range, it will tug back towards it. If the user releases the handle while it's outside the range, we can use a spring animation to snap it back.

We'll make this process a single function that we can provide to pointerX's pipe method. By creating a single, reusable function, we can reuse this piece of code with any Popmotion animation, with configurable ranges and spring strengths.

First, let's apply a spring to the left-most limit. We'll use two transformers, conditional and linearSpring.

conditional takes two functions, an assertion and a transformer. The assertion receives the provided value and returns either true or false. If it returns true, the second function will be provided the value to transform and return.

In this case, the assertion is saying, "If the provided value is smaller than min, pass this value through the linearSpring transformer." The linearSpring is a simple spring function that, unlike the physics or spring animations, has no concept of time. Provide it a strength and a target, and it will create a function that "attracts" any given value towards the target with the defined strength.

Another benefit of composing a function like springRange is that it becomes very testable. The function it returns is, like all transformers, a pure function that takes a single value. You can test this function to see if it passes through values that lie within min and max unaltered, and if it applies springs to values that lie without.

If you let go of the handle while it lies outside the range, it should now spring back to within range. For that, we'll need to adjust the stopDrag function to fire a spring animation:

You can see that to is set either as 0 or rangeWidth depending on which side of the slider the handle currently sits. By playing with damping and stiffness, you can play with a range of different spring-feels.

Momentum Scrolling

A nice touch on the iOS scrubber that I've always appreciated is that if you throw the handle, it gradually slows down rather than coming to a dead stop. We can replicate that easily using the decay animation.

In stopDrag, replace handleX.stop() with momentumScroll(x).

Then, on the line after the snapHandleToEnd function, add a new function called momentumScroll:

Conclusion

Using a combination of different Popmotion functions, we can create a scrubber that has a bit more life and playfulness than the usual.

By using pipe, we compose simple pure functions into more complex behaviours while leaving the composite pieces testable and reusable.

Next Steps

How about trying these challenges:

Make the momentum scroll end with a bounce if the handle hits either end of the scrubber.

Make the handle animate to any point on the scrubber when a user clicks on another part of the range bar.

Add full play controls, like a play/pause button. Update the scrubber handle position as the animation progresses.

]]>2018-05-25T12:32:56+00:00//www.4elements.com/blog/read/introduction-to-popmotion-pointers-and-physics
https://www.4elements.com/blog/read/introduction-to-popmotion-pointers-and-physics#When:12:32:56ZWelcome back to the Introduction to Popmotion tutorial series. In part 1, we discovered how to use tweens and keyframes to make precise, time-scheduled animations.

In Part 2, we're going to look at pointer tracking and velocity-based animations.

Velocity-based animations are different from time-based animations like tween in that the primary property that affects how the animation behaves is velocity. The animation itself might take any amount of time.

We'll look at the three velocity-based animations in Popmotion, spring, decay, and physics. We'll use the velocity of the pointer tracking animation to start these animations, and that'll demonstrate how velocity-based animations can create engaging and playful UIs in a way that time-based animations simply can't.

We provide it with the ball's current position, its velocity, and a target, and the simulation is run. It changes depending on how the user has thrown the ball.

The cool thing about springs is they're expressive. By adjusting the mass, stiffness and damping properties, you can end up with radically different spring-feels.

For instance, if you only change the stiffness above to 1000, you can create a motion that feels like high-energy snapping. Then, by changing mass to 20, you create motion that looks almost like gravity.

There's a combination that will feel right and satisfying for your users, and appropriate to your brand, under almost any circumstance. By playing with different spring-feels, you can communicate different feelings, like a strict out-of-bounds snap or a softer affirmative bounce.

decay

The decay animation, as the name suggests, decays the provided velocity so that the animation gradually slows to a complete stop.

This can be used to create the momentum scrolling effect found on smartphones, like this:

decay automatically calculates a new target based on the provided from and velocity props.

It's possible to adjust the feel of the deceleration by messing with the props outlined in the docs linked above but, unlike spring and physics, decay is designed to work out of the box.

physics

Finally, we have the physics animation. This is Popmotion's Swiss Army knife of velocity-based animations. With it, you can simulate:

constant velocity

acceleration

springs

friction

spring and decay offer super-precise motion and a wider variety of "feels". Soon, they'll both also be scrubbable.

But both are immutable. Once you've started either, their properties are set in stone. Perfect for when we want to start an animation based on the initial from/velocity state, but not so good if we want ongoing interaction.

physics, instead, is an integrated simulation closer to that of a video game. It works by, once per frame, taking the current state and then modifying it based on the current properties at that point in time.

This allows it to be mutable, which means we can change those properties, which then changes the outcome of the simulation.

To demonstrate this, let's make a twist on classic pointer smoothing, with elastic smoothing.

Import physics:

const { pointer, spring, physics, styler, value } = popmotion;

This time, we're going to change the startTracking function. Instead of changing ballXY with pointer, we'll use physics:

Here, we're setting from and velocity as normal. friction and springStrength both adjust the properties of the spring.

restSpeed: false overrides the default behaviour of the animation stopping when motion stops. We want to stop it manually in stopTracking.

On its own, this animation won't do anything because we set to, the spring's target, to the same as from. So let's reimplement the pointer tracking this time to change the spring target of physics. On the last line of startTracking, add:

Conclusion

spring can be used to create a wide variety of spring-feels, while decay is specifically tailored for momentum scroll animations. physics is more limited than either in terms of configurability, but also provides the opportunity to change the simulation in progress, opening new interaction possibilities.

In the next and final part of this introductory series on Popmotion, we're going to take everything we've learned in the first two parts and use them along with some light functional composition to create a scrubbable animation, along with a scrubber to do the scrubbing with!

In our new course, Connect to a Database With Laravel's Eloquent ORM, you'll learn all about Eloquent, which makes it easy to connect to relational data in a database and work with it using object-oriented models in your Laravel app. It is simple to set up, easy to use, and packs a lot of power.

What You’ll Learn

In this course, Envato Tuts+ instructor Jeremy McPeak will teach you how to use Eloquent, Laravel's object-relational mapper (ORM).

Follow along as Jeremy builds the data back-end for a simple guitar database app. You'll learn how to create data tables with migrations, how to create data models, and how to use Eloquent for querying and mutating data.

Watch the Introduction

Take the Course

You can take our new course straight away with a subscription to Envato Elements. For a single low monthly fee, you get access not only to this course, but also to our growing library of over 1,000 video courses and industry-leading eBooks on Envato Tuts+.

Plus you now get unlimited downloads from the huge Envato Elements library of 580,000+ creative assets. Create with unique fonts, photos, graphics and templates, and deliver better projects faster.

In today's article I'm going to demonstrate how to make a web application that displays live game scores from the NHL. The scores will update automatically as the games progress.

This is a very exciting article for me, as it gives me a chance to bring two of my favorite passions together: development and sports.

The technologies that will be used to create the application are:

Node.js

Socket.io

MySportsFeeds.com

If you don't have Node.js installed, visit its download page now and set it up before continuing.

What Is Socket.io?

Socket.io is a technology that connects a client to a server. In this example, the client is a web browser and the server is the Node.js application. The server can have multiple clients connected to it at any given time.

Once the connection has been established, the server can send messages to all of the clients or to an individual client. In return, the client can send messages to the server, allowing for bi-directional real-time communication.

Before Socket.io, web applications would commonly use AJAX, and both the client and server would poll each other looking for events. For example, every 10 seconds an AJAX call would occur to check whether there were any messages to handle.

Polling for messages caused significant overhead on both the client and server, as they would be constantly looking for messages when there were none.

With Socket.io, messages are received instantaneously, without the need to poll, reducing the overhead.

Sample Socket.io Application

Before we consume the real-time sports data, let's create an example application to demonstrate how Socket.io works.

To begin, I am going to create a new Node.js application. In a console window, I am going to navigate to C:\GitHub\NodeJS, create a new folder for my application, and create a new application:

cd \GitHub\NodeJS
mkdir SocketExample
cd SocketExample
npm init

I used all the default settings.

Because we are making a web application, I'm going to use an NPM package called Express to simplify the setup. In a command prompt, install it as follows: npm install express --save

And of course we will need to install the Socket.io package: npm install socket.io --save

Let's begin by creating the web server. Create a new file called index.js and place the following code within it to create the web server using Express:
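A sketch of the Express setup being described:

```javascript
// index.js — create an HTTP server with Express
var app = require('express')();
var http = require('http').Server(app);

// A route at the root of the site returns index.html
app.get('/', function (req, res) {
  res.sendFile(__dirname + '/index.html');
});

// Listen on port 3000
http.listen(3000, function () {
  console.log('HTTP server started on port 3000');
});
```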

If you are not familiar with Express, the above code example includes the Express library and creates a new HTTP server. In this example, the HTTP server is listening on port 3000, e.g. http://localhost:3000. A route is created at the root of the site "/". The result of the route returns an HTML file: index.html.

Before we create the index.html file, let's finish the server by setting up Socket.io. Append the following to your index.js file to create the Socket server:
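A sketch of the Socket.io setup being described:

```javascript
// Attach Socket.io to the existing HTTP server
var io = require('socket.io')(http);

// Fires each time a client connects to the server
io.on('connection', function (socket) {
  console.log('Client connection received');
});
```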

Similar to Express, the code begins by importing the Socket.io library. This is stored in a variable called io. Next, using the io variable, an event handler is created with the on function. The event being listened for is connection. This event is called each time a client connects to the server.

Let's now create our very basic client. Create a new file called index.html and place the following code within:
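A sketch of what index.html might contain:

```html
<html>
  <head>
    <title>Socket Example</title>
  </head>
  <body>
    <!-- The Socket.io client script is served by the Socket.io server -->
    <script src="/socket.io/socket.io.js"></script>
    <script>
      // Initialize a connection back to the server
      var socket = io();
    </script>
  </body>
</html>
```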

The HTML above loads the Socket.io client JavaScript and initializes a connection to the server. To see the example, start your Node application: node index.js

Then, in your browser, navigate to http://localhost:3000. Nothing will appear on the page; however, if you look at the console where the Node application is running, two messages are logged:

HTTP server started on port 3000

Client connection received

Now that we have a successful socket connection, let's put it to use. Let's begin by sending a message from the server to the client. Then, when the client receives the message, it can send a response back to the server.
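A sketch of the updated io.on handler (the {hello: "world"} payload matches the JSON logged later in the article):

```javascript
io.on('connection', function (socket) {
  console.log('Client connection received');

  // Send a named event to the newly connected client
  socket.emit('sendToClient', { hello: 'world' });

  // Listen for the client's reply
  socket.on('receivedFromClient', function (data) {
    console.log(data);
  });
});
```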

The previous io.on function has been updated to include a few new lines of code. The first, socket.emit, sends a message to the client. sendToClient is the name of the event. By naming events, you can send different types of messages so the client can interpret them differently. The second addition is socket.on, which also contains an event name: receivedFromClient. This creates a function that accepts data from the client. In this case, the data is logged to the console window.

That completes the server-side amendments; it can now send and receive data from any connected clients.

Let's complete this example by updating the client to receive the sendToClient event. When it receives the event, it can respond with the receivedFromClient event back to the server.

This is accomplished in the JavaScript portion of the HTML, so in the index.html file, I have updated the JavaScript as follows:
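A sketch of the updated client-side script (the reply payload is illustrative):

```javascript
var socket = io();

// Listen for the event the server emits on connection
socket.on('sendToClient', function (data) {
  console.log(data);

  // Respond to the server with our own named event
  socket.emit('receivedFromClient', { my: 'data' });
});
```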

Using the instantiated socket variable, we have logic very similar to the server's, with a socket.on function. For the client, it is listening for the sendToClient event. As soon as the client is connected, the server sends this message. When the client receives it, it is logged to the console in the browser. The client then uses the same socket.emit that the server used to send the original event. In this instance, the client sends the receivedFromClient event back to the server. When the server receives the message, it is logged to the console window.

Try it out for yourself. First, in a console, run your Node application: node index.js. Then load http://localhost:3000 in your browser.

Check the web browser console and you should see the following JSON data logged: {hello: "world"}

Then, in the command prompt where the Node application is running, you should see the data sent back from the client logged as well.

Both the client and server can use the JSON data received to perform specific tasks. We will learn more about that once we connect to the real-time sports data.

Sports Data

Now that we have mastered how to send and receive data between the client and server, this can be leveraged to provide real-time updates. I chose to use sports data, although the same theory is not limited to sports. Before I began this project, I researched different sports data providers. The one I settled on, because they offer free developer accounts, was MySportsFeeds (I am not affiliated with them in any way). To access the real-time data, I signed up for an account and then made a small donation. Donations start at $1 to have data updated every 10 minutes. This will be good for the example.

Once your account is set up, you can proceed to setting up access to their API. To assist with this, I am going to use their NPM package: npm install mysportsfeeds-node --save

After the package has been installed, API calls can be made as follows:
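A sketch of such a call, based on the mysportsfeeds-node client (the version string, season name, and parameter values are illustrative and may differ across SDK versions):

```javascript
var MySportsFeeds = require('mysportsfeeds-node');

var msf = new MySportsFeeds('1.2', true);
msf.authenticate('your-username', 'your-password');

// Request today's NHL scoreboard as JSON
msf.getData('nhl', '2017-2018-regular', 'scoreboard', 'json', {
  fordate: '20171201',
  force: true
});
```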

In the example above, be sure to replace the call to the authenticate function with your username and password.

The following code executes an API call to get the NHL scoreboard for today. The fordate variable is what specifies today. I've also set force to true so that a response is always returned, even when the data has not changed.

With the current setup, the results of the API call get written to a text file. In the final example, this will be changed; however, for demonstration purposes, the results file can be reviewed in a text editor to understand the contents of the response. The results contain a scoreboard object. This object contains an array called gameScore, which stores the result of each game. Each element contains a child object called game, which provides the information about who is playing.

Outside of the game object, there are a handful of variables that provide the current state of the game. The data changes based on the state of the game. For example, when the game hasn't started, there are only a few variables that tell us the game is not in progress and has not started.

When the game is in progress, additional data is provided about the score, what period the game is in, and how much time is remaining. We will see this in action when we get to the HTML to show the game in the next section.

Real-Time Updates

We have all the pieces of the puzzle, so it is now time to put the puzzle together to reveal the final picture. Currently, MySportsFeeds has limited support for pushing data to us, so we will have to poll the data from them. Luckily, we know the data only changes once every 10 minutes, so we don't need to add overhead by polling for changes too frequently. Once we poll the data from them, we can push those updates from the server to all connected clients.

To perform the polling, I will use the JavaScript setInterval function to call the API (in my case) every 10 minutes to look for updates. When the data is received, an event is sent to all of the connected clients. When the clients receive the event, the game scores will be updated with JavaScript in the web browser.

MySportsFeeds will also be called when the Node application first starts up. This data will be used for any clients who connect before the first 10-minute interval. It is stored in a global variable, which is updated as part of the interval polling. This ensures that any new clients who connect after the polling will have the latest data.

To assist with code cleanliness in the main index.js file, I have created a new file called data.js. This file contains a function that is exported (made available to the index.js file) and performs the previous call to the MySportsFeeds API. Here are the full contents of that file:
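A sketch of what data.js might contain, following the earlier example (getData is assumed to return a promise, and the credentials are placeholders):

```javascript
// data.js — exports a function that fetches today's NHL scoreboard
var MySportsFeeds = require('mysportsfeeds-node');

var msf = new MySportsFeeds('1.2', true, null);
msf.authenticate('your-username', 'your-password');

exports.getData = function () {
  var today = new Date();

  // Build today's date in YYYYMMDD format for the fordate parameter
  var fordate =
    today.getFullYear() +
    ('0' + (today.getMonth() + 1)).slice(-2) +
    ('0' + today.getDate()).slice(-2);

  return msf.getData('nhl', '2017-2018-regular', 'scoreboard', 'json', {
    fordate: fordate,
    force: true
  });
};
```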

The first seven lines of code in index.js instantiate the required libraries and the global latestData variable. The final list of libraries used is: Express, the HTTP server created with Express, Socket.io, and the aforementioned data.js file just created.

With the necessities taken care of, the application populates latestData for clients who connect when the server is first started.

The next few lines set up a route for the root page of the website (http://localhost:3000/) and start the HTTP server listening on port 3000.

Next, Socket.io is set up to look for connections. When a new connection is received, the server emits an event called data with the contents of the latestData variable.

And finally, the final chunk of code creates the polling interval. When the interval fires, the latestData variable is updated with the results of the API call. The server then emits the same data event to all clients.

You may notice that when a client connects and an event is emitted, the event is emitted with the socket variable. This approach sends the event to that connected client only. Inside the interval, the global io is used to emit the event, which sends it to all clients.
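Pulling the pieces just described together, index.js might look like this sketch (the data event name and 10-minute interval follow the description above; getData is assumed to return a promise):

```javascript
// index.js — web server, Socket.io, and the 10-minute polling interval
var express = require('express');
var app = express();
var http = require('http').Server(app);
var io = require('socket.io')(http);
var data = require('./data.js');

var latestData;

// Populate latestData for clients that connect before the first poll
data.getData().then(function (result) {
  latestData = result;
});

// Route for the root page of the website
app.get('/', function (req, res) {
  res.sendFile(__dirname + '/index.html');
});

http.listen(3000, function () {
  console.log('HTTP server started on port 3000');
});

// Send the most recent data to the newly connected client only
io.on('connection', function (socket) {
  socket.emit('data', latestData);
});

// Every 10 minutes, refresh latestData and broadcast it to all clients
setInterval(function () {
  data.getData().then(function (result) {
    latestData = result;
    io.emit('data', latestData);
  });
}, 600000);
```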

That completes the server. Let's work on the client front-end. In an earlier example, I created a basic index.html file that set up the client connection, logged events from the server, and sent one back. I am going to extend that file to contain the completed example.

Because the server is sending us a JSON object, I am going to use jQuery and leverage a jQuery extension called JsRender. This is a templating library. It will allow me to create a template with HTML that will be used to display the contents of each NHL game in an easy-to-use, consistent manner. In a moment, you will see the power of this library. The final code is over 40 lines, so I am going to break it down into smaller chunks and then display the full HTML together at the end.

This first part creates the template that will be used to show the game data:
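A sketch of such a template (the exact data field names, like unplayed and currentPeriodSecondsRemaining, are assumptions based on the response described earlier; helpers referenced with {{:~name()}} must be registered with JsRender via $.views.helpers):

```html
<script id="gameTemplate" type="text/x-jsrender">
  <div class="game">
    <div>
      {{:game.awayTeam.City}} {{:game.awayTeam.Name}} at
      {{:game.homeTeam.City}} {{:game.homeTeam.Name}}
    </div>
    {{if unplayed}}
      <div>Game starts at {{:game.time}}</div>
    {{else isCompleted == 'false'}}
      <div>Current Score: {{:awayScore}} - {{:homeScore}}</div>
      {{if currentIntermission}}
        <div>{{:~ordinal_suffix_of(currentIntermission)}} Intermission</div>
      {{else}}
        <div>{{:~ordinal_suffix_of(currentPeriod)}} Period</div>
        <div>{{:~time_left(currentPeriodSecondsRemaining)}} remaining</div>
      {{/if}}
    {{else}}
      <div>Final Score: {{:awayScore}} - {{:homeScore}}</div>
    {{/if}}
  </div>
</script>
```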

The template is defined using a script tag. It contains the id of the template and a special script type called text/x-jsrender. The template defines a container div for each game that contains a class game to apply some basic styling. Inside this div, the templating begins.

In the next div, the away and home teams are displayed. This is done by concatenating the city and team name together from the game object from the MySportsFeeds data.

{{:game.awayTeam.City}} is how I define an object that will be replaced with a physical value when the template is rendered. This syntax is defined by the JsRender library.

Once the teams are displayed, the next chunk of code does some conditional logic. When the game is unPlayed, a string will be output saying that the game will start at {{:game.time}}.

When the game is not completed, the current score is displayed: Current Score: {{:awayScore}} - {{:homeScore}}. And finally, there is some tricky little logic to identify what period the hockey game is in, or whether it is in intermission.

If the variable currentIntermission is provided in the results, then I use a function I defined called ordinal_suffix_of, which converts the period number to read: 1st (2nd, 3rd, etc.) Intermission.

When it is not in intermission, I look for the currentPeriod value. This also uses ordinal_suffix_of to show that the game is in the 1st (2nd, 3rd, etc.) period.

Beneath this, another function I defined, called time_left, is used to convert the number of seconds remaining into the number of minutes and seconds remaining in the period. For example: 10:12.

The final part of the code displays the final score, because we know the game has completed.

Here is an example of what it looks like when there is a mix of finished games, in-progress games, and games that have not started yet (I'm not a very good designer, so it looks as you would expect when a developer makes their own user interface).

Next up is a chunk of JavaScript that creates the socket, the helper functions ordinal_suffix_of and time_left, and a variable that references the jQuery template created.
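The two helper functions are plain JavaScript; a sketch matching the behaviour described above:

```javascript
// Convert a period number to an ordinal string: 1 -> "1st", 2 -> "2nd", 3 -> "3rd"
function ordinal_suffix_of(i) {
  var j = i % 10;
  var k = i % 100;
  if (j === 1 && k !== 11) return i + 'st';
  if (j === 2 && k !== 12) return i + 'nd';
  if (j === 3 && k !== 13) return i + 'rd';
  return i + 'th';
}

// Convert seconds remaining into a minutes:seconds string, e.g. 612 -> "10:12"
function time_left(seconds) {
  var minutes = Math.floor(seconds / 60);
  var remainder = seconds % 60;
  return minutes + ':' + ('0' + remainder).slice(-2);
}
```

Alongside these, the page would create the socket with var socket = io(); and grab the template with JsRender's var tmpl = $.templates('#gameTemplate'); (the template id is an assumed name).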

I have a placeholder div with the id of data. The result of the template rendering (tmpl.render) writes the HTML to this container. What is really neat is that the JsRender library can accept an array of data, in this case data.scoreboard.gameScore, and it iterates through each element in the array, creating one game per element.

Every 10 minutes, the server will send an event to the client, and the client will redraw the game elements with the updated data. So if you leave the site open and look at it periodically, you will see the game data refresh while games are in progress.

Conclusion

The final product uses Socket.io to create a server that clients connect to. The server fetches data and sends it to the clients. When a client receives the data, it can seamlessly update the display. This reduces load on the server because the client only performs work when it receives an event from the server.

Sockets are not limited to one direction; the client can also send messages to the server. When the server receives a message, it can perform some processing.

Chat applications commonly work this way. The server receives a message from one client and then broadcasts it to all connected clients to show that someone has sent a new message.

Hopefully you enjoyed this article; I had a blast creating this real-time sports application for one of my favorite sports!

One of the reasons for WooCommerce's popularity is its extendability. Like WordPress itself, WooCommerce is packed full of actions and filters that developers can hook into if they want to extend WooCommerce's default functionality.

A great example of this is the ability to create a custom data panel.

What's Covered in This Tutorial?

This tutorial is split into two parts. In part one, we're going to be looking at:

adding a custom panel to WooCommerce

adding custom fields to the panel

sanitizing and saving custom field values

Then in part two, we'll look at:

displaying custom fields on the product page

changing the product price depending on the value of custom fields

displaying custom field values in the cart and order

What Is a WooCommerce Custom Data Panel?

When you create a new product in WooCommerce, you enter most of the critical product information, like price and inventory, in the Product data section.

In the screenshot above, you can see that the Product data section is divided into panels: the tabs down the left, e.g. General, Inventory, etc., each open different panels in the main view on the right.

In this tutorial, we're going to look at creating a custom panel for product data and adding some custom fields to it. Then we'll look at using those custom fields on the front end and saving their values to customer orders.

In our example scenario, we're going to add a 'Giftwrap' panel which contains some custom fields:

a checkbox to include a giftwrapping option for the product on the product page

a checkbox to enable an input field where a customer can enter a message on the product page

an input field to add a price for the giftwrapping option; the price will be added to the product price in the cart

In the back end, it's going to look like this:

And on the front end, it will look something like this:

Create a New Plugin

Because we're extending functionality, we're going to create a plugin rather than adding our code to a theme. That means that our users will be able to retain this extra functionality even if they switch their site's theme. Creating a plugin is out of scope for this tutorial, but if you need some help, take a look at this Tuts+ Coffee Break Course on creating your first plugin:

Create the Custom Tab

To create the custom tab, we hook into the woocommerce_product_data_tabs filter using our create_giftwrap_tab function. This passes the WooCommerce $tabs object in, which we then modify using the following parameters:

label: use this to define the name of your tab.

target: this is used to create an anchor link so needs to be unique.

class: an array of classes that will be applied to your panel.

priority: define where you want your tab to appear.
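A sketch of the filter callback described above (the tab key, text domain, and priority value are illustrative):

```php
// Hooked in with: add_filter( 'woocommerce_product_data_tabs', array( $this, 'create_giftwrap_tab' ) );
public function create_giftwrap_tab( $tabs ) {
    $tabs['giftwrap'] = array(
        'label'    => __( 'Giftwrap', 'my-giftwrap' ),            // name of the tab
        'target'   => 'giftwrap_panel',                           // unique anchor for the panel
        'class'    => array( 'show_if_simple', 'show_if_variable' ),
        'priority' => 70,                                         // where the tab appears
    );
    return $tabs;
}
```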

Product Types

At this stage, it's worth considering which product types we'd like our panel to be enabled for. By default, there are four WooCommerce product types: simple, variable, grouped, and external/affiliate. Let's say for our example scenario, we only want our Giftwrap panel to be enabled for simple and variable product types.

To achieve this, we add the show_if_simple and show_if_variable classes to the class parameter above. If we didn't want to enable the panel for variable product types, we'd just omit the show_if_variable class.

Add Custom Fields

The next hook we use is woocommerce_product_data_panels. This action allows us to output our own markup for the Giftwrap panel. In our class, the function display_giftwrap_fields creates a couple of div wrappers, inside which we use some WooCommerce functions to create custom fields.

Note how the id attribute for our outer div, giftwrap_panel, matches the value we passed into the target parameter of our giftwrap tab above. This is how WooCommerce will know to display this panel when we click the Giftwrap tab.

WooCommerce Custom Field Functions

In our example, the two functions we're using to create our fields are:

woocommerce_wp_checkbox

woocommerce_wp_text_input

These functions are provided by WooCommerce specifically for the purpose of creating custom fields. They take an array of arguments, including:

id: this is the ID of your field. It needs to be unique, and we'll be referencing it later in our code.

label: this is the label as it will appear to the user.

desc_tip: this is the optional tool tip that appears when the user hovers over the question mark icon next to the label.

Note that the woocommerce_wp_text_input function also takes a type argument, where you can specify number for a number input field or text for a text input field. Our field will be used to input a price, so we specify it as number.
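A sketch of the display_giftwrap_fields function described above (the field IDs and labels are illustrative):

```php
public function display_giftwrap_fields() {
    ?>
    <div id="giftwrap_panel" class="panel woocommerce_options_panel">
        <div class="options_group">
            <?php
            // Checkbox: offer giftwrapping on the product page
            woocommerce_wp_checkbox( array(
                'id'       => 'include_giftwrap_option',
                'label'    => __( 'Add giftwrap option', 'my-giftwrap' ),
                'desc_tip' => __( 'Show a giftwrap checkbox on the product page', 'my-giftwrap' ),
            ) );

            // Checkbox: allow the customer to enter a message
            woocommerce_wp_checkbox( array(
                'id'    => 'include_giftwrap_message',
                'label' => __( 'Add giftwrap message', 'my-giftwrap' ),
            ) );

            // Number input: the giftwrap price added to the product price
            woocommerce_wp_text_input( array(
                'id'    => 'giftwrap_cost',
                'label' => __( 'Giftwrap cost', 'my-giftwrap' ),
                'type'  => 'number',
            ) );
            ?>
        </div>
    </div>
    <?php
}
```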

Save the Custom Fields

The final part of our admin class uses the woocommerce_process_product_meta action to save our custom field values.

In order to standardize and optimize how it stores and retrieves data, WooCommerce 3.0 adopted a CRUD (Create, Read, Update, Delete) method for setting and getting product data. You can find out more about the thinking behind this in the WooCommerce 3.0 announcement.

With this in mind, instead of the more familiar get_post_meta and update_post_meta methods that we might have used in the past, we now use the $post_id to create a WooCommerce $product object, and then apply the update_meta_data method to save data. For example:
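A sketch of a save handler using these CRUD methods, hooked to woocommerce_process_product_meta (the field IDs are illustrative):

```php
public function save_giftwrap_fields( $post_id ) {
    $product = wc_get_product( $post_id );

    // Checkboxes are only present in $_POST when ticked
    $giftwrap = isset( $_POST['include_giftwrap_option'] ) ? 'yes' : 'no';
    $product->update_meta_data( 'include_giftwrap_option', $giftwrap );

    // Sanitize the free-form cost field before saving
    if ( isset( $_POST['giftwrap_cost'] ) ) {
        $product->update_meta_data( 'giftwrap_cost', sanitize_text_field( $_POST['giftwrap_cost'] ) );
    }

    $product->save();
}
```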

When you activate your plugin on a site (along with WooCommerce) and then go to create a new product, you'll see your new Giftwrap panel available, along with custom fields. You can update the fields and save them... But you won't see anything on the front end yet.

Conclusion

Let's just recap what we've looked at so far in this article.

We've looked at an example scenario for adding a custom 'Giftwrap' panel to WooCommerce. We've created a plugin and added a class to create the panel. Within the class, we've also used WooCommerce functions to add custom fields, and then we've sanitized and saved those field values.

Today, we are going to explore the concept of broadcasting in the Laravel web framework. It allows you to send notifications to the client side when something happens on the server side. In this article, we are going to use the third-party Pusher library to send notifications to the client side.

If you have ever wanted to send notifications from the server to the client when something happens on a server in Laravel, you're looking for the broadcasting feature.

For example, let's assume that you've implemented a messaging application that allows users of your system to send messages to each other. Now, when user A sends a message to user B, you want to notify user B in real time. You may display a popup or an alert box that informs user B about the new message!

It's the perfect use-case to walk through the concept of broadcasting in Laravel, and that's what we'll implement in this article.

If you are wondering how the server could send notifications to the client, it's using sockets under the hood to accomplish it. Let's understand the basic flow of sockets before we dive deeper into the actual implementation.

Firstly, you need a server that supports the web-sockets protocol and allows the client to establish a web socket connection.

You could implement your own server or use a third-party service like Pusher. We'll prefer the latter in this article.

The client initiates a web socket connection to the web socket server and receives a unique identifier upon successful connection.

Once the connection is successful, the client subscribes to certain channels at which it would like to receive events.

Finally, under the subscribed channel, the client registers events that it would like to listen to.

Now on the server side, when a particular event happens, we inform the web-socket server by providing it with the channel name and event name.

And finally, the web-socket server broadcasts that event to registered clients on that particular channel.

Don't worry if it looks like too much in a single go; you will get the hang of it as we move through this article.

Next, let's have a look at the default broadcast configuration file at config/broadcasting.php.

In this article, we are going to use the Pusher broadcast adapter. For debugging purposes, you could also use the log adapter. Of course, if you're using the log adapter, the client won't receive any event notifications, and it'll only be logged to the laravel.log file.

From the next section onward, we'll right away dive into the actual implementation of the aforementioned use-case.

Setting Up the Prerequisites

In broadcasting, there are different types of channels—public, private, and presence. When you want to broadcast your events publicly, it's the public channel that you are supposed to use. Conversely, the private channel is used when you want to restrict event notifications to certain private channels.

In our use-case, we want to notify users when they get a new message. And to be eligible to receive broadcast notifications, the user must be logged in. Thus, we'll need to use the private channel in our case.

Core Authentication Feature

Firstly, you need to enable the default Laravel authentication system so that features like registration, login and the like work out of the box. If you're not sure how to do that, the official documentation provides a quick insight into that.

Pusher SDK—Installation and Configuration

As we're going to use the Pusher third-party service as our web-socket server, you need to create an account with it and make sure you have the necessary API credentials with your post registration. If you're facing any trouble creating it, don't hesitate to ask me in the comment section.

Next, we need to install the Pusher PHP SDK so that our Laravel application can send broadcast notifications to the Pusher web-socket server.

In your Laravel application root, run the following command to install it as a composer package.

Next, I had to make a few changes in a couple of core Laravel files in order to make it compatible with the latest Pusher SDK. Of course, I don't recommend making any changes in the core framework, but I'll just highlight what needs to be done.

Go ahead and open the vendor/laravel/framework/src/Illuminate/Broadcasting/Broadcasters/PusherBroadcaster.php file. Just replace the snippet use Pusher; with use Pusher\Pusher;.

Next, let's open the vendor/laravel/framework/src/Illuminate/Broadcasting/BroadcastManager.php file and make a similar change in the following snippet.

Finally, let's enable the broadcast service in config/app.php by removing the comment in the following line.

App\Providers\BroadcastServiceProvider::class,

So far, we've installed server-specific libraries. In the next section, we'll go through client libraries that need to be installed as well.

Pusher and Laravel Echo Libraries—Installation and Configuration

In broadcasting, the responsibility of the client side is to subscribe to channels and listen for desired events. Under the hood, it accomplishes it by opening a new connection to the web-socket server.

Luckily, we don't have to implement any complex JavaScript stuff to achieve it as Laravel already provides a useful client library, Laravel Echo, that helps us deal with sockets on the client side. Also, it supports the Pusher service that we're going to use in this article.

You can install Laravel Echo using the NPM package manager. Of course, you need to install node and npm in the first place if you don't have them already. The rest is pretty simple, as shown in the following snippet.

$ npm install laravel-echo

What we're interested in is the node_modules/laravel-echo/dist/echo.js file that you should copy to public/echo.js.

Yes, I understand, it's a bit of overkill to just get a single JavaScript file. If you don't want to go through this exercise, you can download the echo.js file from my GitHub.

And with that, we're done with our client libraries setup.

Back-End File Setup

Recall that we were talking about setting up an application that allows users of our application to send messages to each other. On the other hand, we'll send broadcast notifications to users that are logged in when they receive a new message from other users.

In this section, we'll create the files that are required in order to implement the use-case that we're looking for.

To start with, let's create the Message model that holds messages sent by users to each other.

$ php artisan make:model Message --migration

We also need to add a few fields like to, from and message to our messages table. So let's change the migration file before running the migrate command.
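The up method of the migration might be changed like this (a sketch; the column types are assumptions):

```php
public function up()
{
    Schema::create('messages', function (Blueprint $table) {
        $table->increments('id');
        $table->integer('from')->unsigned();  // sender user ID
        $table->integer('to')->unsigned();    // recipient user ID
        $table->text('message');
        $table->timestamps();
    });
}
```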

Now, let's run the migrate command that creates the messages table in the database.

$ php artisan migrate

Whenever you want to raise a custom event in Laravel, you should create a class for that event. Based on the type of event, Laravel reacts accordingly and takes the necessary actions.

If the event is a normal event, Laravel calls the associated listener classes. On the other hand, if the event is of broadcast type, Laravel sends that event to the web-socket server that's configured in the config/broadcasting.php file.

As we're using the Pusher service in our example, Laravel will send events to the Pusher server.

Let's use the following artisan command to create a custom event class—NewMessageNotification.

$ php artisan make:event NewMessageNotification

That should create the app/Events/NewMessageNotification.php class. Let's replace the contents of that file with the following.
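A sketch of the event class being described (the Message model namespace is assumed):

```php
<?php

namespace App\Events;

use App\Message;
use Illuminate\Broadcasting\InteractsWithSockets;
use Illuminate\Broadcasting\PrivateChannel;
use Illuminate\Contracts\Broadcasting\ShouldBroadcastNow;
use Illuminate\Foundation\Events\Dispatchable;
use Illuminate\Queue\SerializesModels;

class NewMessageNotification implements ShouldBroadcastNow
{
    use Dispatchable, InteractsWithSockets, SerializesModels;

    public $message;

    // The Message model is passed along with the event
    public function __construct(Message $message)
    {
        $this->message = $message;
    }

    // Broadcast on a private, per-user channel
    public function broadcastOn()
    {
        return new PrivateChannel('user.' . $this->message->to);
    }
}
```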

The important thing to note is that the NewMessageNotification class implements the ShouldBroadcastNow interface. Thus, when we raise an event, Laravel knows that this event should be broadcast.

In fact, you could also implement the ShouldBroadcast interface, and Laravel adds an event into the event queue. It'll be processed by the event queue worker when it gets a chance to do so. In our case, we want to broadcast it right away, and that's why we've used the ShouldBroadcastNow interface.

In our case, we want to display a message the user has received, and thus we've passed the Message model in the constructor argument. In this way, the data will be passed along with the event.

Next, there is the broadcastOn method that defines the name of the channel on which the event will be broadcast. In our case, we've used the private channel as we want to restrict the event broadcast to logged-in users.

The $this->message->to variable refers to the ID of the user to which the event will be broadcast. Thus, it effectively makes the channel name like user.{USER_ID}.

In the case of private channels, the client must authenticate itself before establishing a connection with the web-socket server. It makes sure that events that are broadcast on private channels are sent to authenticated clients only. In our case, it means that only logged-in users will be able to subscribe to our channel user.{USER_ID}.

If you're using the Laravel Echo client library for channel subscription, you're in luck! It automatically takes care of the authentication part, and you just need to define the channel routes.

Let's go ahead and add a route for our private channel in the routes/channels.php file.
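The channel route might look like this sketch:

```php
// routes/channels.php
Broadcast::channel('user.{toUserId}', function ($user, $toUserId) {
    // Only let a logged-in user listen on their own channel
    return (int) $user->id === (int) $toUserId;
});
```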

As you can see, we've defined the user.{toUserId} route for our private channel.

The second argument of the channel method should be a closure function. Laravel automatically passes the currently logged-in user as the first argument of the closure function, and the second argument is usually fetched from the channel name.

When the client tries to subscribe to the private channel user.{USER_ID}, the Laravel Echo library performs the necessary authentication in the background using the XMLHttpRequest object, more commonly known as XHR.

Now that we're done with the setup, let's go ahead and test it.

Front-End File Setup

In this section, we'll create the files that are required to test our use-case.

Let's go ahead and create a controller file at app/Http/Controllers/MessageController.php with the following contents.
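Since the original listing isn't reproduced here, the controller might be sketched roughly as below; the broadcast view name and the way the Message model is fetched are assumptions for illustration.

```php
<?php

namespace App\Http\Controllers;

use App\Events\NewMessageNotification;
use App\Message;

class MessageController extends Controller
{
    public function index()
    {
        // Render the view that opens the web-socket connection.
        return view('broadcast');
    }

    public function send()
    {
        // Mimic an incoming message for a recipient user (placeholder lookup).
        $message = Message::find(1);

        // Raise the broadcast event.
        event(new NewMessageNotification($message));
    }
}
```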

Firstly, we load the necessary client libraries, Laravel Echo and Pusher, allowing us to open the web-socket connection to the Pusher web-socket server.

Next, we create the instance of Echo by providing Pusher as our broadcast adapter and other necessary Pusher-related information.

Moving further, we use the private method of Echo to subscribe to the private channel user.{USER_ID}. As we discussed earlier, the client must authenticate itself before subscribing to a private channel, so the Echo object performs the necessary authentication by sending an XHR in the background with the necessary parameters. Finally, Laravel tries to find the user.{USER_ID} route, which should match the route that we've defined in the routes/channels.php file.

If everything goes well, you should have an open web-socket connection with the Pusher web-socket server, listening for events on the user.{USER_ID} channel! From now on, we'll be able to receive all incoming events on this channel.

In our case, we want to listen for the NewMessageNotification event and thus we've used the listen method of the Echo object to achieve it. To keep things simple, we'll just alert the message that we've received from the Pusher server.
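Putting the front-end steps above together, a sketch might look like the following; the Pusher key, cluster, and userId variable are placeholders, and it assumes the laravel-echo and pusher-js packages are available.

```javascript
// Front-end sketch (e.g. in a Blade view). Key, cluster, and userId
// are placeholders, not values from the original article.
import Echo from 'laravel-echo';
import Pusher from 'pusher-js';

window.Pusher = Pusher;

const echo = new Echo({
    broadcaster: 'pusher',
    key: 'your-pusher-app-key',
    cluster: 'your-app-cluster',
});

// Subscribe to the private channel; Echo handles the XHR authentication.
echo.private('user.' + userId)
    .listen('NewMessageNotification', (e) => {
        // Keep it simple: just alert the received payload.
        alert(JSON.stringify(e.message));
    });
```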

So that was the setup for receiving events from the web-socket server. Next, we'll go through the send method in the controller file, which raises the broadcast event.

In our case, we're going to notify logged-in users when they receive a new message. So we've tried to mimic that behavior in the send method.

Next, we've used the event helper function to raise the NewMessageNotification event. Since the NewMessageNotification event is of ShouldBroadcastNow type, Laravel loads the default broadcast configuration from the config/broadcasting.php file. Finally, it broadcasts the NewMessageNotification event to the configured web-socket server on the user.{USER_ID} channel.

In our case, the event will be broadcast to the Pusher web-socket server on the user.{USER_ID} channel. If the ID of the recipient user is 1, the event will be broadcast over the user.1 channel.

As we discussed earlier, we already have a setup that listens to events on this channel, so it should be able to receive this event, and the alert box is displayed to the user!

Let's go ahead and walk through how you are supposed to test the use-case that we've built so far.

Open the URL http://your-laravel-site-domain/message/index in your browser. If you're not logged in yet, you'll be redirected to the login screen. Once you're logged in, you should see the broadcast view that we defined earlier—nothing fancy yet.

In fact, Laravel has done quite a bit of work in the background for you already. As we've enabled the Pusher.logToConsole setting provided by the Pusher client library, it logs everything in the browser console for debugging purposes. Let's see what's being logged to the console when you access the http://your-laravel-site-domain/message/index page.

It has opened the web-socket connection with the Pusher web-socket server and subscribed itself to listen to events on the private channel. Of course, you could have a different channel name in your case based on the ID of the user that you're logged in with. Now, let's keep this page open as we move to test the send method.

As you can see, it tells you that you've just received the App\Events\NewMessageNotification event from the Pusher web-socket server on the private-user.2 channel.

In fact, you can see what's happening at the Pusher end as well. Go to your Pusher account and navigate to your application. Under the Debug Console, you should be able to see messages being logged.

And that brings us to the end of this article! Hopefully it wasn't too much to take in at once; I've tried to keep things as simple as possible.

Conclusion

Today, we went through one of the least discussed features of Laravel—broadcasting. It allows you to send real-time notifications using web sockets. Throughout the course of this article, we built a real-world example that demonstrated the aforementioned concept.

Yes, I know, it's a lot of stuff to digest in a single article, so feel free to use the comment feed below should you run into trouble during implementation.

]]>2018-05-07T12:26:24+00:00//www.4elements.com/blog/read/getting-started-with-redux-why-redux
https://www.4elements.com/blog/read/getting-started-with-redux-why-redux#When:13:15:59ZWhen you're learning React, you will almost always hear people say how great Redux is and that you should give it a try. The React ecosystem is growing at a swift pace, and there are many libraries that you can hook up with React, such as Flow, Redux, MobX, and various middlewares.

Learning React is easy, but getting used to the entire React ecosystem takes time. This tutorial is an introduction to one of the integral components of the React ecosystem—Redux.

Basic Non-Redux Terminology

Here are some commonly used terms that you may not be familiar with; they are not specific to Redux per se. You can skim through this section and come back when something doesn't make sense.

Pure Function

A pure function is just a normal function with two additional constraints that it has to satisfy:

Given a set of inputs, the function should always return the same output.

It produces no side effects.

For instance, here is a pure function that returns the sum of two numbers.
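A minimal version of that function in JavaScript might look like this:

```javascript
// A pure function: the same inputs always produce the same output,
// and nothing outside the function is read or modified.
function add(a, b) {
  return a + b;
}

console.log(add(2, 3)); // 5
```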

Observable Side Effects

"Observable side effects" is a fancy term for interactions made by a function with the outside world. If a function tries to write a value into a variable that exists outside the function or tries to call an external method, then you can safely call these things side effects.

However, if a pure function calls another pure function, then the function can be treated as pure. Here are some of the common side effects:

making API calls

logging to console or printing data

mutating data

DOM manipulation

retrieving the current time

Container and Presentational Components

Splitting the component architecture into two is useful while working with React applications. You can broadly classify them into two categories: container components and presentational components. They are also popularly known as smart and dumb components.

The container component is concerned with how things work, whereas presentational components are concerned with how things look. I've covered these concepts in more detail in another tutorial: Container vs. Presentational Components in React.

Mutable vs. Immutable Objects

A mutable object can be defined as follows:

A mutable object is an object whose state can be modified after it is created.

Immutability is the exact opposite—an immutable object is an object whose state cannot be modified after it is created. In JavaScript, strings and numbers are immutable, but objects and arrays are not. The example demonstrates the difference better.
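Here is a short JavaScript sketch of that difference:

```javascript
// Strings are immutable: string methods return a new string.
const greeting = 'hello';
const shout = greeting.toUpperCase();
console.log(greeting); // 'hello' (the original string is unchanged)
console.log(shout);    // 'HELLO'

// Objects and arrays are mutable: methods can change them in place.
const numbers = [1, 2, 3];
numbers.push(4);
console.log(numbers);  // [ 1, 2, 3, 4 ] (the original array was modified)
```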

What Is Redux?

The official page defines Redux as follows:

Redux is a predictable state container for JavaScript applications.

Although that accurately describes Redux, it's easy to get lost when you see the bigger picture of Redux for the first time. It has so many moving pieces that you need to fit together. But once you do, I promise you, you'll start loving Redux.

Redux is a state management library that you can hook up with any JavaScript library, and not just React. However, it works very well with React because of React's functional nature. To understand this better, let's have a look at the state.

As you can see, a component's state determines what gets rendered and how it behaves. The application has an initial state, and any user interaction triggers an action that updates the state. When the state is updated, the page is rerendered.

With React, each component has a local state that is accessible from within the component, or you can pass them down as props to child components. We usually use the state to store:

UI state and transitory data. This includes things like a list of UI elements for a navigation menu or form inputs in a controlled component.

Application state such as data fetched from a server, the login state of the user, etc.

Storing application data in a component's state is okay when you have a basic React application with a few components.

Component hierarchy of a basic application

However, most real-life apps will have lots more features and components. When the number of levels in the component hierarchy increases, managing the state becomes problematic.

Sketch of a medium-sized application

Why Should You Use Redux?

Here is a very probable scenario that you might come across while working with React.

You are building a medium-sized application, and you have your components neatly split into smart and dumb components.

The smart components handle the state and then pass them down to the dumb components. They take care of making API calls, fetching the data from the data source, processing the data, and then setting the state. The dumb components receive the props and return the UI representation.

When you're about to write a new component, it's not always clear where to place the state. You could let the state be part of a container that's an immediate parent of the presentational component. Better yet, you could move the state higher up in the hierarchy so that the state is accessible to multiple presentational components.

When the app grows, you see that the state is scattered all over the place. When a component needs to access state that it doesn't immediately have access to, you will try to lift the state up to the closest common ancestor.

After constant refactoring and cleaning up, you end up with most of the state being held at the top of the component hierarchy.

Finally, you decide that it's a good idea to let a component at the top handle the state globally and then pass everything down. Every other component can subscribe to the props that they need and ignore the rest.

This is what I've personally experienced with React, and lots of other developers will agree. React is a view library, and it's not React's job to specifically manage state. What we are looking for is the Separation of Concerns principle.

Redux helps you to separate the application state from React. Redux creates a global store that resides at the top level of your application and feeds the state to all other components. Unlike Flux, Redux doesn't have multiple store objects. The entire state of the application is within that store object, and you could potentially swap the view layer with another library with the store intact.

The components re-render every time the store is updated, with very little impact on performance. That's good news, and this brings tons of benefits along with it. You can treat all your React components as dumb, and React can just focus on the view side of things.

Now that we know why Redux is useful, let's dive into the Redux architecture.

The Redux Architecture

When you're learning Redux, there are a few core concepts that you need to get used to. The image below describes the Redux architecture and how everything is connected together.

Redux in a nutshell

If you're used to Flux, some of the elements might look familiar. If not, that's okay too because we're going to cover everything from the base. First, make sure that you have redux installed:

npm install redux

Use create-react-app or your favorite webpack configuration to set up the development server. Since Redux is an independent state management library, we're not going to plug it into React yet. So remove the contents of index.js, and we'll play around with Redux for the rest of this tutorial.

Store

The store is one big JavaScript object that has tons of key-value pairs that represent the current state of the application. Unlike the state object in React that is sprinkled across different components, we have only one store. The store provides the application state, and every time the state updates, the view rerenders.

However, you can never mutate or change the store. Instead, you create new versions of the store.

(previousState, action) => newState

Because of this, you can time travel through all the states from the moment the app was booted in your browser.

The store has three methods to communicate with the rest of the architecture. They are:

store.getState()—To access the current state tree of your application.

store.dispatch(action)—To trigger a state change based on an action. More about actions below.

store.subscribe(listener)—To listen for any change in the state. It will be called every time an action is dispatched.

Let's create a store. Redux has a createStore method to create a new store. You need to pass it a reducer; we haven't covered reducers yet, so for now I will just create a placeholder function called reducer. You may optionally specify a second argument that sets the initial state of the store.
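To make the store's three methods concrete, here is a deliberately stripped-down sketch of what a store looks like internally. The real Redux createStore does considerably more (argument validation, middleware enhancers, and so on), so treat this as an illustration rather than the actual implementation.

```javascript
// A minimal store exposing the three methods described above.
function createStore(reducer, initialState) {
  let state = initialState;
  let listeners = [];

  return {
    // Return the current state tree.
    getState: () => state,

    // Compute the next state via the reducer, then notify listeners.
    dispatch: (action) => {
      state = reducer(state, action);
      listeners.forEach((listener) => listener());
    },

    // Register a listener; return a function that unsubscribes it.
    subscribe: (listener) => {
      listeners.push(listener);
      return () => {
        listeners = listeners.filter((l) => l !== listener);
      };
    },
  };
}

// A placeholder reducer for now; we'll flesh it out shortly.
const reducer = (state, action) => state;

const store = createStore(reducer, { count: 0 });
console.log(store.getState()); // { count: 0 }
```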

So how do we update the store? Redux has something called actions that make this happen.

Action/Action Creators

Actions are also plain JavaScript objects that send information from your application to the store. If you have a very simple counter with an increment button, pressing it will result in an action being triggered that looks like this:

{
  type: "INCREMENT",
  payload: 1
}

They are the only source of information for the store. The state of the store changes only in response to an action. Each action should have a type property that describes what the action object intends to do. Other than that, the structure of the action is completely up to you. However, keep your actions small, because an action represents the minimum amount of information required to transform the application state.

For instance, in the example above, the type property is set to "INCREMENT", and an additional payload property is included. You could rename the payload property to something more meaningful or, in our case, omit it entirely. You can dispatch an action to the store like this.

store.dispatch({type: "INCREMENT", payload: 1});

While coding Redux, you won't normally use actions directly. Instead, you will be calling functions that return actions, and these functions are popularly known as action creators. Here is the action creator for the increment action that we discussed earlier.
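As a sketch, the increment action creator might look like this:

```javascript
// An action creator: a plain function that returns the action object,
// so callers don't have to build it by hand each time.
function incrementCount(count) {
  return {
    type: 'INCREMENT',
    payload: count,
  };
}

// Dispatching then reads more naturally:
// store.dispatch(incrementCount(1));
```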

If you head to the browser console, you will see that it's working, partially. We get undefined because we haven't yet defined the reducer.

So now we have covered actions and the store. However, we need a mechanism that takes the information provided by an action and transforms the state of the store accordingly. Reducers serve this purpose.

Reducers

An action describes the problem, and the reducer is responsible for solving the problem. In the earlier example, the incrementCount method returned an action that supplied information about the type of change that we wanted to make to the state. The reducer uses this information to actually update the state. There's a big point highlighted in the docs that you should always remember while using Redux:

Given the same arguments, a Reducer should calculate the next state and return it. No surprises. No side effects. No API calls. No mutations. Just a calculation.

What this means is that a reducer should be a pure function: given a set of inputs, it should always return the same output, and it shouldn't do anything more. A reducer is also not the place for side effects such as making AJAX calls or fetching data from an API.

The reducer accepts two arguments—state and action—and it returns a new state.

(previousState, action) => newState

The state accepts a default value, the initialState, which will be used only if the value of the state is undefined. Otherwise, the actual value of the state will be retained. We use the switch statement to select the right action. Refresh the browser, and everything works as expected.

Let's add a case for DECREMENT, without which the counter is incomplete.
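Putting the pieces above together, the counter reducer with both cases might be sketched as follows; the shape of the state object is an assumption for illustration.

```javascript
const initialState = { count: 0 };

// A pure reducer: it computes the next state without mutating the old
// one and without side effects.
function reducer(state = initialState, action) {
  switch (action.type) {
    case 'INCREMENT':
      return { count: state.count + action.payload };
    case 'DECREMENT':
      return { count: state.count - action.payload };
    default:
      // Unknown actions (including Redux's init action) leave state alone.
      return state;
  }
}
```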

Summary

This tutorial was meant to be a starting point for managing state with Redux. We've covered the essentials needed to understand the basic Redux concepts such as the store, actions, and reducers. Towards the end of the tutorial, we also created a working Redux counter demo. Although it wasn't much, we learned how all the pieces of the puzzle fit together.

Over the last couple of years, React has grown in popularity. In fact, we have a number of items in the marketplace that are available for purchase, review, implementation, and so on. If you’re looking for additional resources around React, don’t hesitate to check them out.

In the next tutorial, we will make use of the things we've learned here to create a React application using Redux. Stay tuned until then. Share your thoughts in the comments.

]]>2018-05-04T13:15:59+00:00//www.4elements.com/blog/read/20-useful-php-scripts-available-on-codecanyon
https://www.4elements.com/blog/read/20-useful-php-scripts-available-on-codecanyon#When:19:18:16ZFor many, PHP is the lifeblood of web development.

It may be a general-purpose scripting language, but it powers WordPress, Drupal, Magento, and more; not to mention the thousands of individual PHP scripts available. If you've got a problem that needs an online solution, more than likely, you can solve it by creating a PHP script—or by downloading something already built.

PHP is clearly suited for web development. Take these 20 popular PHP scripts available on Envato Market, for example:

There's no need to use an entire CMS to handle user logins and have private pages that can only be viewed by logged-in visitors to your website.

This can easily be done by leveraging PHP Login & User Management, a MySQL-powered PHP login script for your website. You can even change User Levels using the built-in Control Panel when you need different levels of page security.

This script includes:

captcha integration

profiles

social media login support

login expiration

lost password activation code email

welcome and activation emails

as well as many control panel features for Admin

With the installation wizard and the HTML5 Twitter Bootstrap design, you'll be up and running with solid PHP Login & User Management in no time.

If you need a CRM for your business, or you want to up your freelance project management game, then instead of adding another monthly fee to your expenses, why not host your own customer and project management system?

More specifically, the Ultimate Client Manager - CRM - Pro Edition.

UCM Pro really does pack an impressive punch of features. You can:

enjoy industry-standard PGP/RSA encrypted fields

email support tickets

organize your leads, customers, projects, and invoices

have your customers log in and see their project status

enable subscription billing features to help organize and automate client billing

convert invoices into PDF documents

make multiple currency and tax rate adjustments

have customers and staff upload project files

This is only a fraction of the useful features you'll find. And while this CRM contender is robust enough to challenge many other subscription-based CRMs, it's the little things like being able to change your CRM theme that give the Ultimate Client Manager - CRM - Pro Edition that extra polish.

When it comes to managing customer relationships, there are a wide variety of solutions. Truth be told, it's not a one-size-fits-all solution, which is why it's a good thing to have a number of choices.

Since we're all looking for a different set of features as it relates to CRM systems, here are some of the things that Perfex offers its users:

Build professional, great-looking estimates and invoices.

Powerful support system with the ability to auto-import tickets.

Track time spent on tasks and bill your customers, with the ability to assign multiple staff members to a task and track time per assigned staff member.

Add task followers even if the staff member is not a project member. The staff member will be able to track the task progress without accessing the project.

Keep track of leads in one place and easily follow their progress. Ability to auto import leads from email, add notes, and create proposals. Organize your leads in stages and change stages easily with drag and drop.

Create good-looking proposals for leads or customers and increase sales.

Record your company/project expenses and have the ability to bill to your customers and auto-convert to invoices.

The Twitter Bootstrap powered design—complete with modal popups, tabs, alerts, and more—looks great on the desktop or smartphone. But it's the feature set of this PHP script that really catches your eye:

results sorted from nearest to furthest

use Google Street View

unlimited locations

bulk CSV import

autofill search field

multi-language support

users can request to add locations

multiple admins

add your own map markers

and much, much, more

This is a great way to leverage Google Maps into your website for both desktop and mobile users, and includes enough unique features to use Super Store Finder for more than your stereotypical use cases.

If you're serious about having your own email marketing application, this is a great place to start. In fact, the MailWizz - Email Marketing Application is robust and feature-rich enough for you to become an email service provider for your customers!

You'll have no problem sending tens of thousands of emails in just an hour, or importing and exporting subscriber lists, reports, and stats; not to mention enjoying IP location services, and best of all, unlimited lists and subscribers.

If you're a freelancer, or even a small business, and you're looking for an all-in-one solution to help manage the overhead for all things that you're doing related to your business, check out Freelance Cockpit.

Keep up to date and share the current exchange rate for over 1,000 different cryptocurrencies with the Coin Table - Cryptocurrency Market CMS. Easily manage it within its own admin panel and create multiple authenticated users.

"Coin Table is a Content Management System built for Cryptocurrency Real-time Information."

If you've used Bit.ly very much—especially if you're using a custom domain—you'll find there's a giant leap between their free and paid service. That makes something like Premium URL Shortener a "no-brainer".

This PHP URL shortener was built with performance in mind, and that's exactly what it does. It comes complete with a powerful dashboard, admin, and geotargeting, and it's fully social-media ready. You'll not only enjoy using Premium URL Shortener, but maybe even take advantage of the new built-in membership system.

Ever since Instagram introduced videos, anyone and everyone who uses the service has seen the sponsored posts.

But what if you were able to leverage the platform to market your own product without needing to use the sponsorship features they provide? Or what if you were able to target people, likes, comments, posts, etc., all from within a single application?

Project management is one of those areas of running a business that some prefer more than others. If you're a freelancer, it comes with the territory; if you're part of a larger business, then it may be your role.

Regardless, finding the best way to manage said projects can be tough. Perhaps Rise is a viable solution?

Ultimate Project Manager is the best way to manage your projects, clients and team members. You can easily collaborate with your team and monitor your work. It’s easy to use & install.

And it has a ton of features, to boot. Some of the examples include:

Projects. Manage all your projects using some amazing tools. Create tasks in projects and assign your team members on the tasks. Create milestones to estimate the timeframe. Upload files by dragging and dropping in projects and discuss with your team. Let your team members comment on tasks and get notifications for important events. See activity logs for projects.

Clients. It’s very simple to add your clients in Rise. You’ll get detailed information about contacts, projects, invoices, payments, estimates, tickets and notes of each client. You can allow your clients to use the client portal. Each client will get a separate dashboard to see their projects. Let your clients create tasks for the projects and get feedback instantly.

Team members. Assign tasks to your team members and monitor the status easily. You can set different permissions on their access.

Invoices. Send invoices to your clients by email with a PDF copy of the invoice. And get paid online via Stripe and PayPal.

If you find yourself in this role, then I highly recommend checking out what Rise has to offer and see if it fits the bill. In a field that's got a lot of competition, this particular product may hit the right price point.

Conclusion

You can clearly see how versatile PHP is—it can be used for anything from simple solutions to full social networks and project management.

And if you're curious to know what other PHP scripts are out there, take a peek at what's on offer at Envato Market.

]]>2018-05-03T19:18:16+00:00//www.4elements.com/blog/read/how-to-find-and-fix-poor-page-load-times-with-raygun
https://www.4elements.com/blog/read/how-to-find-and-fix-poor-page-load-times-with-raygun#When:14:43:46ZIn this tutorial, we'll focus on finding and fixing poor page load times with Raygun. But before we do that, let's discuss why slightly longer page load times can be such a big deal.

One of the most important things that you can do to make a good first impression on potential customers or clients visiting your website is improve its loading speed.

Imagine a customer who just heard about your company from a friend. You sell a product online which users can purchase by visiting your website. If different website pages are taking a long time to load and you are not selling that product exclusively, there is a good chance that the customer will abandon your site and go somewhere else.

You did not just miss out on your first sale here; you also missed the opportunity to gain a loyal customer who would have purchased more products in the future.

That's the thing with the Internet—people are just a few clicks away from leaving your website and buying something from your competitors. Faster loading pages can give you an edge over competitors and increase your revenue.

How Can Raygun Help?

Raygun relies on Real User Monitoring Insights (RUM Insights) to improve a website's performance and page load time. The term "Real User Monitoring" is the key here. You could use tools like WebPageTest and Google PageSpeed Insights to optimize individual pages, but those results will not be based on real user data. On the other hand, the data provided by Raygun is based on real users who visited your website.

Raygun also presents the information in a more organized manner by telling you things like the average page speed for the website, the most requested pages, and the slowest pages. This way, you can prioritize which page or section of the website needs to be optimized first.

You can also see how fast the website is loading for users in different countries or for users with different browsers. Similarly, you can compare the speed of your website on mobile vs. desktop.

Another advantage of Raygun is that it will show you how the website is performing for different users. For example, the website may be loading slowly for one of your most valuable clients. In such cases, you would definitely like to know about it and do something to improve their experience before it is too late.

We will learn how to do all that with Raygun in the next few sections of this article.

Integrating Raygun Into Your Website

You need to sign up for an account before integrating Raygun into your website. This account will give you access to all Raygun features for free for 14 days.

Once you have registered successfully, you can click on the Create Application button to create a new application. You can fill out a name for your application on the next screen and then check some boxes to receive notifications about errors and real user monitoring insights.

Now you just have to select your development platform or framework. In this case, we are using JavaScript.

Finally, you will get some code that you have to add on all the pages you want to monitor. Instead of placing the following code in your website, you could also download the production or development version of the library and include it yourself.

If you don't add any more code, Raygun will now start collecting anonymous data. This means that you will be able to know how your website is performing for different users, but there will be no way to identify those users.

There is an easy fix for this problem. All you have to do is add the following code in your webpages and Raygun will take care of the rest.
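As a rough illustration (copy the exact snippet from your Raygun dashboard rather than this sketch), identifying a user with the rg4js helper typically looks something like the following; the API key and user details are placeholders.

```javascript
// Sketch only: your dashboard provides the exact loader snippet.
rg4js('apiKey', 'YOUR_RAYGUN_API_KEY');
rg4js('enablePulse', true); // turn on Real User Monitoring

// Identify the current user so sessions are no longer anonymous.
rg4js('setUser', {
    identifier: 'user@example.com',
    isAnonymous: false,
    email: 'user@example.com',
    fullName: 'Jane Doe',
});
```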

You will have to include these three pieces of code in all the pages that you want to track. Once done, the data will start showing up in the dashboard for you to analyze.

Finding Pages With Poor Load Times

The Real User Monitoring section in the Raygun dashboard has a lot of tabs to present the data in different formats. We will briefly go over all these tabs to show you how the information presented in them can be used to find pages with poor load times.

The Live tab will give you an overview of your site's performance in real time. It has different metrics like Health Score to show you how the site is currently performing. You can read more about all these metrics in the documentation for the Live tab on the Raygun website.

It also has a world map to point out the countries of your currently active users. You will also find a list of the most recent requests to your website by different users. Here is an image showing the most recent requests to our website.

The Performance tab has five useful metrics to give you a quick overview of the website's page load times. For example, a median load time of 1.41 seconds means that 50% of your pages load in under 1.41 seconds. Similarly, a P90 load time of 6.78 seconds tells you that 90% of the time, the website loads in under 6.78 seconds.

This should give you a rough idea of the performance of a website and how slow it is for the slowest 10% of users.

The Performance tab also has a list of the slowest and most requested pages at the bottom. Knowing the most popular and the slowest pages can be very helpful when you want to prioritize which sections of your website to fix first.

Even though all pages in a website should load as quickly as possible, some of these pages are more important than others. Therefore, you might be interested in finding out the performance of a particular page on a website. You can do so by simply typing the page you are looking for in the input field. This will give you information about the median, average, and P90 load time of a particular page. Here is the data for the home page of our website.

You can use the Sessions tab to see session-related information like the total number of sessions, total number of users, and median session length. The sessions table will give you a quick overview of the last 150 sessions with information like the country, duration, total page views, and the last visited page for a session.

Clicking on the magnifying glass will show you more details of a particular session like the pages the user visited, the load time of those pages, and the browser/device used during the session.

The Users tab will give you an overview of the satisfaction level of different users with your website. This can be very helpful when you want to see how a particular user is interacting with your website and if or why their page load time is more than expected.

There are three other tabs to show information about all the page views in terms of browsers, platforms, and geography. This way you will be able to know if a webpage is loading slowly only on a particular browser or platform. You will also have a rough idea of the distribution of users. For instance, knowing if most of your clients are from a particular country or use a particular browser can be very handy.

Raygun lists the percentage of visitors from a particular continent at the top of the Geo tab. After that, it provides a map with the distribution of load times. Countries with the slowest load times are filled with red, and countries with quick load times are filled with green.

If you are consistently getting poor loading times from a particular country, it might be worth your time to look closely and find out the reason.

Fixing Poor Page Load Times

In the previous section, we learned how to use all the data collected by Raygun to figure out which pages are taking a long time to load or if there are any countries where our page load times are longer than usual.

Now it is time to see how we can use Raygun to discover issues which might be causing a particular page or the whole website to load slower than usual.

Improving poor page load time of a website can be pretty overwhelming, especially if the website is very complicated or if it has a lot of pages. The trouble is in finding what to improve and where to start.

Luckily, Raygun can offer you some general insights to fix your website. You can click on the Insights option under the Real User Monitoring menu, and Raygun will scan your website for any potential issues. You can find a list of all these rules in the official Raygun documentation. Fixing the listed issues can significantly speed up your website.

Besides following these general guidelines, you might also want to isolate the pages that have been loading slowly. Once you have isolated them, Raygun can show you the time they take to resolve DNS, latency, SSL handshake, etc. This will give you a good idea of the areas where you can make improvements to reduce the page load time. The following image should make it clear.

You can also filter the data in order to get a more accurate picture of the load time for a particular page and various factors affecting it. The above image showed you the average latency for all requests made to the "About Us" page. However, you can click on the Add Filter button at the top and only see the "About Us" loading time graph for a specific country like Italy.

You will also see all the requests made by a specific page at the bottom. Basically, you will be able to see the DNS, latency, SSL, server, and transfer time for every resource loaded for a specific page and see if any of them is the culprit.

Once you find out which resources are taking too long to load, you can start optimizing your pages.

Final Thoughts

As you saw in this tutorial, Raygun can be of great help to organizations looking to improve their page load times. It is super easy to integrate, and after successful integration, the data will simply start showing up in the dashboard without any intervention from your side.

Raygun also has different tabs to organize the collected data so that you can analyze it more easily and efficiently. For example, it can show you load times for different countries, browsers, and platforms. It also has filters that you can use to isolate a particular set of data from the rest and analyze it closely.

If you or your company are looking for an easy-to-integrate tool which can provide great insights about how your real users are interacting with your website, you should definitely give Raygun a try. You don't have anything to lose since it is free for the first 14 days!

And while you're here, check out some of our other tutorials on Raygun!

]]>2018-04-26T14:43:46+00:00//www.4elements.com/blog/read/notifications_in_laravel
https://www.4elements.com/blog/read/notifications_in_laravel#When:12:00:00ZIn this article, we're going to explore the notification system in the Laravel web framework. The notification system in Laravel allows you to send notifications to users over different channels. Today, we'll discuss how you can send notifications over the mail channel.

Basics of Notifications

During application development, you often need to notify users about different state changes. It could be either sending email notifications when the order status is changed or sending an SMS about their login activity for security purposes. In particular, we're talking about messages that are short and just provide insight into the state changes.

Laravel already provides a built-in feature that helps us achieve something similar—notifications. In fact, it makes sending notification messages to users a breeze and a fun experience!

The beauty of this approach is that it allows you to choose the channels over which notifications will be sent. Let's quickly go through the different notification channels supported by Laravel.

Mail: The notifications will be sent in the form of email to users.

SMS: As the name suggests, users will receive SMS notifications on their phone.

Slack: In this case, the notifications will be sent on Slack channels.

Database: This option allows you to store notifications in a database, should you wish to build a custom UI to display them.

Among different notification channels, we'll use the mail channel in our example use-case that we're going to develop over the course of this tutorial.

In fact, it'll be a pretty simple use-case that allows users of our application to send messages to each other. When users receive a new message in their inbox, we'll notify them about this event by sending them an email. Of course, we'll do that by using the notification feature of Laravel!

Create a Custom Notification Class

As we discussed earlier, we are going to set up an application that allows users of our application to send messages to each other. On the other hand, we'll notify users when they receive a new message from other users via email.

In this section, we'll create necessary files that are required in order to implement the use-case that we're looking for.

To start with, let's create the Message model that holds messages sent by users to each other.

$ php artisan make:model Message --migration

We also need to add a few fields like to, from and message to the messages table. So let's change the migration file before running the migrate command.
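A sketch of what the updated migration might contain (the column names come from the article; the column types and the rest of the file are assumptions):

```php
<?php
// database/migrations/..._create_messages_table.php (sketch)
use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateMessagesTable extends Migration
{
    public function up()
    {
        Schema::create('messages', function (Blueprint $table) {
            $table->increments('id');
            $table->integer('from');  // sender user id
            $table->integer('to');    // recipient user id
            $table->text('message');  // the message body
            $table->timestamps();
        });
    }

    public function down()
    {
        Schema::dropIfExists('messages');
    }
}
```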

Now, let's run the migrate command that creates the messages table in the database.

$ php artisan migrate

That should create the messages table in the database.

Also, make sure that you have enabled the default Laravel authentication system in the first place so that features like registration and login work out of the box. If you're not sure how to do that, the Laravel documentation provides a quick insight into that.

Since each notification in Laravel is represented by a separate class, we need to create a custom notification class that will be used to notify users. Let's use the following artisan command to create a custom notification class—NewMessage.

$ php artisan make:notification NewMessage

That should create the app/Notifications/NewMessage.php class, so let's replace the contents of that file with the following contents.
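A sketch of what that class might look like, based on the description that follows (the message wording and action URL are assumptions):

```php
<?php
// app/Notifications/NewMessage.php (sketch)
namespace App\Notifications;

use App\User;
use Illuminate\Notifications\Notification;
use Illuminate\Notifications\Messages\MailMessage;

class NewMessage extends Notification
{
    protected $fromUser;

    public function __construct(User $fromUser)
    {
        $this->fromUser = $fromUser;
    }

    // the channel(s) this notification will be sent on
    public function via($notifiable)
    {
        return ['mail'];
    }

    // build the mail message for the mail channel
    public function toMail($notifiable)
    {
        return (new MailMessage)
            ->line('You have received a new message from ' . $this->fromUser->name . '.')
            ->action('View Inbox', url('/inbox'));
    }
}
```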

As we're going to use the mail channel to send notifications to users, the via method is configured accordingly. So this is the method that allows you to configure the channel type of a notification.

Next, there's the toMail method that allows you to configure various email parameters. In fact, the toMail method should return the instance of \Illuminate\Notifications\Messages\MailMessage, and that class provides useful methods that allow you to configure email parameters.

Among various methods, the line method allows you to add a single line in a message. On the other hand, there's the action method that allows you to add a call-to-action button in a message.

In this way, you could format a message that will be sent to users. So that's how you're supposed to configure the notification class while you're using the mail channel to send notifications.

At the end, you need to make sure that you implement the necessary methods according to the channel type configured in the via method. For example, if you're using the database channel that stores notifications in a database, you don't need to configure the toMail method; instead, you should implement the toArray method, which formats the data that needs to be stored in a database.

How to Send Notifications

In the previous section, we created a notification class that's ready to send notifications. In this section, we'll create files that demonstrate how you could actually send notifications using the NewMessage notification class.

Let's create a controller file at app/Http/Controllers/NotificationController.php with the following contents.
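A sketch of what that controller might contain (the user ids and message text are illustrative):

```php
<?php
// app/Http/Controllers/NotificationController.php (sketch)
namespace App\Http\Controllers;

use App\User;
use App\Message;
use App\Notifications\NewMessage;

class NotificationController extends Controller
{
    public function index()
    {
        // mimic user 2 sending a message to user 1
        $fromUser = User::find(2);
        $toUser = User::find(1);

        $message = new Message;
        $message->from = $fromUser->id;
        $message->to = $toUser->id;
        $message->message = 'Hello, this is a test message!';
        $message->save();

        // notify the recipient about the new message
        $toUser->notify(new NewMessage($fromUser));
    }
}
```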

Of course, you need to add an associated route in the routes/web.php file.

Route::get('notify/index', 'NotificationController@index');

There are two ways Laravel allows you to send notifications: by using either the notifiable entity or the Notification facade.

If the entity model class utilizes the Illuminate\Notifications\Notifiable trait, then you could call the notify method on that model. The App\User class implements the Notifiable trait and thus it becomes the notifiable entity. On the other hand, you could also use the Illuminate\Support\Facades\Notification Facade to send notifications to users.

Let's go through the index method of the controller.

In our case, we're going to notify users when they receive a new message. So we've tried to mimic that behavior in the index method in the first place.

Next, we've notified the recipient user about a new message using the notify method on the $toUser object, as it's the notifiable entity.

$toUser->notify(new NewMessage($fromUser));

You may have noticed that we also pass the $fromUser object as the first argument of the __construct method, since we want to include the sender's username in the message.

On the other hand, if you want to mimic it using the Notification facade, it's pretty easy to do so using the following snippet.

Notification::send($toUser, new NewMessage($fromUser));

As you can see, we've used the send method of the Notification facade to send a notification to a user.

Go ahead and open the URL http://your-laravel-site-domain/notify/index in your browser. If you're not logged in yet, you'll be redirected to the login screen. Once you're logged in, you should receive a notification email at the email address that's associated with user 1.

You may be wondering how the notification system detects the to address when we haven't configured it anywhere yet. In that case, the notification system tries to find the email property in the notifiable object. And the App\User object class already has that property as we're using the default Laravel authentication system.

However, if you would like to override this behavior and use a property other than email, you just need to define the following method in the notifiable model (the App\User class, in our case).
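A sketch of that override; note that Laravel resolves routeNotificationForMail on the notifiable model (App\User here), and the email_address property name is just an example:

```php
// app/User.php (sketch)
public function routeNotificationForMail($notification)
{
    return $this->email_address;
}
```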

Now, the notification system should look for the email_address property instead of the email property to fetch the to address.

And that's how to use the notification system in Laravel. That brings us to the end of this article as well!

Conclusion

What we've gone through today is one of the useful, yet least discussed, features in Laravel—notifications. It allows you to send notifications to users over different channels.

After a quick introduction, we implemented a real-world example that demonstrated how to send notifications over the mail channel. In fact, it's really handy in the case of sending short messages about state changes in your application.

For those of you who are either just getting started with Laravel or looking to expand your knowledge, site, or application with extensions, we have a variety of things you can study in Envato Market.

Should you have any queries or suggestions, don't hesitate to post them in the comments below!

]]>2018-04-23T12:00:00+00:00//www.4elements.com/blog/read/introduction_to_the_stimulus_framework
https://www.4elements.com/blog/read/introduction_to_the_stimulus_framework#When:12:15:48ZThere are lots of JavaScript frameworks out there. Sometimes I even start to think that I'm the only one who has not yet created a framework. Some solutions, like Angular, are big and complex, whereas some, like Backbone (which is more a library than a framework), are quite simple and only provide a handful of tools to speed up the development process.

In today's article I would like to present a brand new framework called Stimulus. It was created by a Basecamp team led by David Heinemeier Hansson, the popular developer who created Ruby on Rails.

Stimulus is a small framework that was never intended to grow into something big. It has its very own philosophy and attitude towards front-end development, which some programmers might like or dislike. Stimulus is young, but version 1 has already been released so it should be safe to use in production. I've played with this framework quite a bit and really liked its simplicity and elegance. Hopefully, you will enjoy it too!

In this post we'll discuss the basics of Stimulus while creating a single-page application with asynchronous data loading, events, state persistence, and other common things.

Introduction to Stimulus

Stimulus was created by developers at Basecamp. Instead of creating single-page JavaScript applications, they decided to choose a majestic monolith powered by Turbolinks and some JavaScript. This JavaScript code evolved into a small and modest framework which does not require you to spend hours and hours learning all its concepts and caveats.

Stimulus is mostly meant to attach itself to existing DOM elements and work with them in some way. It is possible, however, to dynamically render the contents as well. All in all, this framework is quite different from other popular solutions as, for example, it persists state in HTML, not in JavaScript objects. Some developers may find it inconvenient, but do give Stimulus a chance, as it really may surprise you.

The framework has only three main concepts that you should remember, which are:

Controllers: JS classes with some methods and callbacks that attach themselves to the DOM. The attachment happens when a data-controller "magic" attribute appears on the page. The documentation explains that this attribute is a bridge between HTML and JavaScript, just like classes serve as bridges between HTML and CSS. One controller can be attached to multiple elements, and one element may be powered up by multiple controllers.

Actions: methods to be called on specific events. They are defined in special data-action attributes.

Targets: important elements that can be easily accessed and manipulated. They are specified with the help of data-target attributes.

As you can see, the attributes listed above allow you to separate content from behaviour logic in a very simple and natural way. Later in this article, we will see all these concepts in action and notice how easy it is to read an HTML document and understand what's going on.

The quickest way to get started with Stimulus is by utilizing this starter project, which has an Express web server and Babel already hooked up. It also depends on Yarn, so be sure to install it. To clone the project and install all its dependencies, run:
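At the time of writing, the starter project lived at stimulusjs/stimulus-starter; assuming that's still the case, the commands would be something like:

```shell
git clone https://github.com/stimulusjs/stimulus-starter.git
cd stimulus-starter
yarn install
yarn start
```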

Some Markup

Suppose we are creating a small single-page application that presents a list of employees and loads information like their name, photo, position, salary, birthdate, etc.

Let's start with the list of employees. All the markup that we are going to write should be placed inside the public/index.html file, which already has some very minimal HTML. For now, we will hard-code all our employees in the following way:
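For illustration, the hard-coded list might look something like this (the names are made up):

```html
<!-- public/index.html (sketch) -->
<h1>Our employees</h1>
<ul>
  <li><a href="#">Ann Smith</a></li>
  <li><a href="#">Matthew Brown</a></li>
  <li><a href="#">Sofia Lee</a></li>
  <li><a href="#">James Wilson</a></li>
</ul>
```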

Creating a Controller

As the official documentation explains, the main purpose of Stimulus is to connect JavaScript objects (called controllers) to the DOM elements. The controllers will then bring the page to life. As a convention, controllers' names should end with a _controller postfix (which should be very familiar to Rails developers).

There is a directory for controllers already available called src/controllers. Inside, you will find a hello_controller.js file that defines an empty class:
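The file ships as an empty class; here is a sketch with two logging callbacks added so we can verify the wiring (remember to add data-controller="hello" to an element in public/index.html so the controller attaches):

```javascript
// src/controllers/hello_controller.js (sketch)
import { Controller } from "stimulus"

export default class extends Controller {
  initialize() {
    console.log("hello, initialized!")
  }

  connect() {
    console.log("hello, connected!")
  }
}
```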

Navigate to http://localhost:9000. Open your browser's console and make sure both messages are displayed. It means that everything is working as expected!

Adding Events

The next core Stimulus concept is events. Events are used to respond to various user actions on the page: clicking, hovering, focusing, etc. Stimulus does not try to reinvent the wheel, and its event system is based on generic JS events.

For instance, let's bind a click event to our employees. Whenever this event happens, I would like to call the as yet non-existent choose() method of the employees_controller:
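A sketch of that binding; the data-action attribute routes the click to a choose() method on the controller (the employees identifier and the method body are assumptions):

```javascript
// In the markup, each employee link gets:
//
//   <a href="#" data-action="click->employees#choose">Ann Smith</a>
//
// And in src/controllers/employees_controller.js:
choose(e) {
  e.preventDefault()
  console.log(e)    // the event object
  console.log(this) // the controller instance, not the clicked link
}
```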

e is the special event object that contains full information about the triggered event. Note, by the way, that this returns the controller itself, not an individual link! In order to gain access to the element that acts as the event's target, use e.target.

Reload the page, click on a list item, and observe the result!

Working With the State

Now that we have bound a click event handler to the employees, I'd like to store the currently chosen person. Why? Having stored this info, we can prevent the same employee from being selected the second time. This will later allow us to avoid loading the same information multiple times as well.

Stimulus instructs us to persist state in the Data API, which seems quite reasonable. First of all, let's provide some arbitrary ids for each employee using the data-id attribute:

Next, we need to fetch the id and persist it. Using the Data API is very common with Stimulus, so a special this.data object is provided for each controller. With its help, we can run the following methods:

this.data.get('name'): get the value by its attribute.

this.data.set('name', value): set the value under some attribute.

this.data.has('name'): check if the attribute exists (returns a boolean value).

Unfortunately, these shortcuts are not available for the targets of the click events, so we must stick with getAttribute() in their case:
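A sketch of how the handler and a small getter/setter pair around the Data API might look (names are assumptions):

```javascript
choose(e) {
  e.preventDefault()
  // event targets don't get the this.data shortcuts, so use getAttribute()
  this.currentEmployee = e.target.getAttribute('data-id')
}

get currentEmployee() {
  return this.data.get('current-employee')
}

set currentEmployee(id) {
  if (this.currentEmployee !== id) {
    this.data.set('current-employee', id)
  }
}
```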

Reload the page to make sure that everything still works. You won't notice any visual changes yet, but with the help of the Inspector tool you'll notice that the ul has the data-employees-current-employee attribute with a value that changes as you click on the links. The employees part in the attribute's name is the controller's identifier and is being added automatically.

Now let's move on and highlight the currently chosen employee.

Using Targets

When an employee is selected, I would like to assign the corresponding element with a .chosen class. Of course, we might have solved this task by using some JS selector functions, but Stimulus provides a neater solution.

Meet targets, which allow you to mark one or more important elements on the page. These elements can then be easily accessed and manipulated as needed. In order to create a target, add a data-target attribute with the value of {controller}.{target_name} (which is called a target descriptor):
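A sketch: each list item is marked as a target, and the controller then walks over them (the highlight method's name is an assumption):

```javascript
// Markup:
//
//   <li data-target="employees.employee" data-id="1">...</li>
//
// Controller:
static get targets() {
  return ['employee']
}

highlightCurrentEmployee() {
  this.employeeTargets.forEach(el => {
    el.classList.toggle('chosen', el.getAttribute('data-id') === this.currentEmployee)
  })
}
```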

The idea is simple: we iterate over an array of targets and for each target compare its data-id to the one stored under this.currentEmployee. If it matches, the element is assigned the .chosen class. Otherwise, this class is removed. You may also extract the if (this.currentEmployee !== id) { condition from the setter and use it in the chosen() method instead:

Reload the page once again, click on a person, and make sure that person is being highlighted properly.

Loading Data Asynchronously

Our next task is to load information about the chosen employee. In a real-world application, you would have to set up a hosting provider, a back-end powered by something like Django or Rails, and an API endpoint that responds with JSON containing all the necessary data. But we are going to make things a bit simpler and concentrate on the client side only. Create an employees directory under the public folder. Next, add four files containing data for individual employees:

This method accepts an employee's id and sends an asynchronous fetch request to the given URI. There are also two promises: one to fetch the body and another one to display the loaded info (we'll add the corresponding method in a moment).
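A sketch of such a loader (the method name, URL scheme, and display method are assumptions):

```javascript
loadInfoFor(id) {
  fetch(`/employees/${id}.json`)
    .then(response => response.json())
    .then(json => this.displayInfo(json))
}
```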

That's it! We are now dynamically rendering our list of employees based on the data returned by the server.

Conclusion

In this article we have covered a modest JavaScript framework called Stimulus. We have seen how to create a new application, add a controller with a bunch of callbacks and actions, and introduce events and actions. Also, we've done some asynchronous data loading with the help of fetch requests.

All in all, that's it for the basics of Stimulus—it really does not expect you to have some arcane knowledge in order to craft web applications. Of course, the framework will probably gain some new features in the future, but the developers are not planning to turn it into a huge monster with hundreds of tools.

If you'd like to find more examples of using Stimulus, you may also check out this tiny handbook. And if you’re looking for additional JavaScript resources to study or to use in your work, check out what we have available in the Envato Market.

Did you like Stimulus? Would you be interested in trying to create a real-world application powered by this framework? Share your thoughts in the comments!

In your terminal, type the following commands to install the react-router and react-transition-group modules respectively.

npm install react-router-dom --save

npm install react-transition-group@1.x --save

After installing the packages, you can check the package.json file inside your main project directory to verify that the modules are included under dependencies.

Router Components

There are basically two different router options: HashRouter and BrowserRouter.

As the name implies, HashRouter uses hashes to keep track of your links, and it is suitable for static servers. On the other hand, if you have a dynamic server, it is a better option to use BrowserRouter, considering the fact that your URLs will be prettier.

Once you decide which one you should use, just go ahead and add the component to your index.js file.

import { HashRouter } from 'react-router-dom'

The next thing is to wrap our <App> component with the router component.
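A sketch of the wrapped entry point (the file layout is assumed from a typical create-react-app setup):

```jsx
// src/index.js (sketch)
import React from 'react'
import ReactDOM from 'react-dom'
import { HashRouter } from 'react-router-dom'
import App from './App'

ReactDOM.render(
  <HashRouter>
    <App />
  </HashRouter>,
  document.getElementById('root')
)
```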

Content.js

Inside our <Content> component, we will define the Routes to match the Links.

We need the Switch and Route components from react-router-dom. So, first of all, import them.

import { Switch, Route } from 'react-router-dom'

Second of all, import the components that we want to route to. These are the Home, Works and About components for our example. Assuming you have already created those components inside the components folder, we also need to import them.

import Home from './Home'

import Works from './Works'

import About from './About'

Those components can be anything. I just defined them as stateless functional components with minimum content. An example template is below. You can use this for all three components, but just don't forget to change the names accordingly.
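A possible template (the content is placeholder):

```jsx
// src/components/Home.js (sketch) — duplicate for Works and About,
// changing the names accordingly
import React from 'react'

const Home = () => (
  <div>
    <h1>Home</h1>
  </div>
)

export default Home
```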

Notice that the extra exact prop is required for the Home component, which is the main directory. Using exact forces the Route to match the exact pathname. If it's not used, other pathnames starting with / would also be matched by the Home component, and for each link, it would only display the Home component.
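Putting the routes together inside the <Content> component's render output might look like this (the paths are assumptions):

```jsx
<Switch>
  <Route exact path="/" component={Home} />
  <Route path="/works" component={Works} />
  <Route path="/about" component={About} />
</Switch>
```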

Now when you click the menu links, your app should be switching the content.

Animating the Route Transitions

So far, we have a working router system. Now we will animate the route transitions. In order to achieve this, we will use the react-transition-group module.

We will be animating the mounting state of each component. When you route different components with the Route component inside Switch, you are essentially mounting and unmounting different components accordingly.

We will use react-transition-group in each component we want to animate. So you can have a different mounting animation for each component. I will only use one animation for all of them.

As an example, let's use the <Home> component.

First, we need to import CSSTransitionGroup.

import { CSSTransitionGroup } from 'react-transition-group'

Then you need to wrap your content with it.

Since we are dealing with the mounting state of the component, we enable transitionAppear and set a timeout for it. We also disable transitionEnter and transitionLeave, since these only apply once the component is mounted. If you are planning to animate any children of the component, you will need to enable them.

Lastly, add the specific transitionName so that we can refer to it inside the CSS file.
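A sketch of the wrapped Home component (the fade class name and the 300 ms timeout are assumptions):

```jsx
<CSSTransitionGroup
  transitionName="fade"
  transitionAppear={true}
  transitionAppearTimeout={300}
  transitionEnter={false}
  transitionLeave={false}
>
  <div>
    <h1>Home</h1>
  </div>
</CSSTransitionGroup>
```

In the CSS file, the matching rules would then start .fade-appear at a near-zero opacity and have .fade-appear.fade-appear-active transition to full opacity over the same 300 ms.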

If you refresh the page, you should see the fade-in effect of the Home component.

If you apply the same procedure to all the other routed components, you will see their individual animations when you change the content with your Menu.

Conclusion

In this tutorial, we covered the react-router-dom and react-transition-group modules. However, there's more to both modules than we covered in this tutorial. Here is a working demo of what was covered.

So, to learn more features, always go through the documentation of the modules you are using.

Over the last couple of years, React has grown in popularity. In fact, we have a number of items in the marketplace that are available for purchase, review, implementation, and so on. If you’re looking for additional resources around React, don’t hesitate to check them out.

]]>2018-04-20T12:00:00+00:00//www.4elements.com/blog/read/testing_in_laravel
https://www.4elements.com/blog/read/testing_in_laravel#When:12:00:00ZIrrespective of the application you're dealing with, testing is an important and often overlooked aspect that you should give the attention it deserves. Today, we're going to discuss it in the context of the Laravel web framework.

In fact, Laravel already supports the PHPUnit testing framework in the core itself. PHPUnit is one of the most popular and widely accepted testing frameworks across the PHP community. It allows you to create both kinds of tests—unit and functional.

We'll start with a basic introduction to unit and functional testing. As we move on, we'll explore how to create unit and functional tests in Laravel. I assume that you're familiar with the basics of the PHPUnit framework, as we will explore it in the context of Laravel in this article.

Unit and Functional Tests

If you're already familiar with the PHPUnit framework, you should know that you can divide tests into two flavors—unit tests and functional tests.

In unit tests, you test the correctness of a given function or a method. More importantly, you test a single piece of your code's logic at a given time.

In your development, if you find that the method you've implemented contains more than one logical unit, you're better off splitting that into multiple methods so that each method holds a single logical and testable piece of code.

Let's have a quick look at an example that's an ideal case for unit testing.

public function getNameAttribute($value)
{
    return ucfirst($value);
}

As you can see, the method does one and only one thing: it uses the ucfirst function to return the given value with its first character converted to uppercase.

Whereas the unit test is used to test the correctness of a single logical unit of code, the functional test, on the other hand, allows you to test the correctness of a specific use case. More specifically, it allows you to simulate actions a user performs in an application in order to run a specific use case.

For example, you could implement a functional test case for some login functionality that may involve the following steps.

Create the GET request to access the login page.

Check if we are on the login page.

Generate the POST request to post data to the login page.

Check if the session was created successfully.

So that's how you're supposed to create the functional test case. From the next section onward, we'll create examples that demonstrate how to create unit and functional test cases in Laravel.

Setting Up the Prerequisites

Before we go ahead and create actual tests, we need to set up a couple of things that'll be used in our tests.

We will create the Post model and related migration to start with. Go ahead and run the following artisan command to create the Post model.

$ php artisan make:model Post --migration

The above command should create the Post model class and an associated database migration as well.

Unit Testing

In the previous section, we did the initial setup that's going to be useful to us in this and upcoming sections. In this section, we are going to create an example that demonstrates the concepts of unit testing in Laravel.

As always, Laravel provides an artisan command that allows you to create the base template class of the unit test case.

Run the following command to create the AccessorTest unit test case class. It's important to note that we're passing the --unit flag, which creates a unit test case; it'll be placed under the tests/Unit directory.

$ php artisan make:test AccessorTest --unit

And that should create the following class at tests/Unit/AccessorTest.php.
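A sketch of that class, reconstructed from the description that follows (the raw-query details are assumptions):

```php
<?php
// tests/Unit/AccessorTest.php (sketch)
namespace Tests\Unit;

use App\Post;
use Tests\TestCase;
use Illuminate\Support\Facades\DB;

class AccessorTest extends TestCase
{
    public function testAccessorTest()
    {
        // fetch the raw record and prepare the expected title
        $db_post = DB::select('select * from posts where id = 1');
        $db_post_title = ucfirst($db_post[0]->name);

        // load the same post through Eloquent, which runs getNameAttribute
        $model_post = Post::find(1);

        $this->assertEquals($db_post_title, $model_post->name);
    }
}
```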

As you can see, the code is exactly the same as it would have been in core PHP. We've just imported Laravel-specific dependencies that allow us to use the required APIs. In the testAccessorTest method, we're supposed to test the correctness of the getNameAttribute method of the Post model.

To do that, we've fetched an example post from the database and prepared the expected output in the $db_post_title variable. Next, we load the same post using the Eloquent model that executes the getNameAttribute method as well to prepare the post title. Finally, we use the assertEquals method to compare both variables as usual.

So that's how to prepare unit test cases in Laravel.

Functional Testing

In this section, we'll create the functional test case that tests the functionality of the controller that we created earlier.

Run the following command to create the AccessorTest functional test case class. As we're not using the --unit flag, it'll be treated as a functional test case and placed under the tests/Feature directory.
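The command itself:

$ php artisan make:test AccessorTest

And a sketch of the resulting test class, filled in per the description that follows (the route and query details are assumptions):

```php
<?php
// tests/Feature/AccessorTest.php (sketch)
namespace Tests\Feature;

use Tests\TestCase;
use Illuminate\Support\Facades\DB;

class AccessorTest extends TestCase
{
    public function testBasicTest()
    {
        // prepare the expected, capitalized title from the raw record
        $db_post = DB::select('select * from posts where id = 1');
        $db_post_title = ucfirst($db_post[0]->name);

        // simulate the GET request and grab the response
        $response = $this->get('/accessor/index?id=1');

        $response->assertStatus(200);
        $response->assertSeeText($db_post_title);
    }
}
```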

Again, the code should look familiar to those who have prior experience in functional testing.

Firstly, we're fetching an example post from the database and preparing the expected output in the $db_post_title variable. Following that, we try to simulate the /accessor/index?id=1 GET request and grab the response of that request in the $response variable.

Next, we've tried to match the response code in the $response variable with the expected response code. In our case, it should be 200 as we should get a valid response for our GET request. Further, the response should contain a title that starts with uppercase, and that's exactly what we're trying to match using the assertSeeText method.

And that's an example of the functional test case. Now we have everything we need to run our tests. Let's go ahead and run the following command in the root of your application to run all tests.

$ phpunit

That should run all tests in your application. You should see a standard PHPUnit output that displays the status of tests and assertions in your application.

And with that, we're at the end of this article.

Conclusion

Today, we explored the details of testing in Laravel, which already supports PHPUnit in its core. The article started with a basic introduction to unit and functional testing, and as we moved on we explored the specifics of testing in the context of Laravel.

In the process, we created a handful of examples that demonstrated how you could create unit and functional test cases using the artisan command.

If you're just getting started with Laravel, or you're looking to extend your knowledge, site, or application with extensions, we have a variety of things you can study in Envato Market.

Don't hesitate to share your thoughts in the comments below!

]]>2018-04-18T12:00:00+00:00//www.4elements.com/blog/read/12_best_contact_form_php_scripts
https://www.4elements.com/blog/read/12_best_contact_form_php_scripts#When:12:53:19ZContact forms are a must-have for every website. They encourage your site visitors to engage with you while potentially lowering the amount of spam you get.

For businesses, this engagement with visitors increases the chances of turning them into clients or customers and thus increasing revenue.

Whether your need is for a simple three-line contact form or a more complex one that offers loads of options and functions, you’re sure to find the right PHP contact form here in our 12 Best Contact Form PHP Scripts on CodeCanyon.

There's a reason Quform - Responsive AJAX Contact Form is one of the best-selling PHP contact forms at CodeCanyon. This versatile AJAX contact form can be adapted to be a registration form, quote form, or any other form needed. With tons of other customisations available, Quform - Responsive AJAX Contact Form is bound to keep the most discerning user happy.

Best features:

three ready-to-use themes with six variations

ability to integrate into your own theme design

ability to create complex form layouts

file uploads supported

and more

User DigitalOxide says:

"This script is incredible! It is very detailed instructions, examples and is very fully featured. I can't think of anything I could ever need (as far as forms go) that this script is not able to accomplish!"

KONTAKTO only entered the market in March of 2017 but has already developed a name for itself as one of the top-rated scripts in this category. The standout feature of this beautifully designed contact form is the stylish map with a location pin that comes integrated in the form.

Best features:

required field validation

anti-spam with simple Captcha math

defaults to PHP mail but SMTP option available

repeat submission prevention

and more

User vholecek says:

"The design is outstanding and the author is very responsive to questions. I got in a little over my head on the deployment of the template and the author had it sorted out in less than 24 hours."

ContactMe is an incredibly versatile and easily customisable contact form. With 28 ready-to-use styles and 4 different themes, the sky's the limit when it comes to creating the ideal form to fit your needs.

Another CodeCanyon top seller, PHP Form Builder includes the jQuery live validation plugin and enables you to build any type of form, connect your database, insert, update or delete records, and send your emails using customisable HTML/CSS templates.

Best features:

over 50 prebuilt templates included

accepts any HTML5 form elements

default options ready for Bootstrap

email sending

and more

User sflahaut says:

"Excellent product containing ready to use examples of all types of forms. Documentation is excellent and customer support is exceptional as many others have commented previously. I highly recommend this as it can save a lot of time, especially for developers with not a lot of web experience like myself."

Contact Framework has been around for a while, and it’s just gotten better and better with each new update. Its simple yet modern design comes in three themes and five colours, giving you a lot of options for customisation and integration into your site design.

Having made its debut in 2017, SLEEK Contact Form is one of the newest PHP contact form scripts on CodeCanyon. With its simple and stylish design and functionality, it is ideal for creatives or those looking to bring a bit of cool style to their website's contact form.

The Ultimate PHP, HTML5 and AJAX Contact Form replaces the hugely successful AJAX Contact Form and allows you to easily place and manage a self-contained contact form on any page of your existing PHP website.

Best features:

supports file uploads to attach to email

field type validation

multiple forms per page allowed

Google reCAPTCHA capable

and more

User geudde says:

"Awesome coding and impeccable documentation. I had the form embedded in my website in less than 10 minutes, and most of that time was spent signing up with Google for reCAPTCHA. I purchased the same author's AJAX form years ago. The new version is so much more elegant and blends seamlessly into my site."

Perfect Contact Us Form is a Bootstrap-based form which is fully customisable and easy to use. The easy-to-use form integrates well with HTML and PHP pages and will appeal to both beginners and more experienced developers alike.

Best features:

AJAX based

both SMTP and PHP email script

jQuery-based Captcha is included for anti-spam

and more

User andreahale says:

"Excellent support and super fast response time. Quickly helped me with the modifications I wanted to make to the form."

Contact Form Generator is another of CodeCanyon’s best-selling PHP Contact Form Scripts. It features a user-friendly drag-and-drop interface that helps you build contact forms, feedback forms, online surveys, event registrations, etc., and get responses via email in a matter of minutes. It is a highly effective form builder that enables you to create well-designed contact forms and extend their range to include other functions.

Best features:

instant email or SMS notifications

custom email auto-responder

integrated with MailChimp, AWeber, and five other email subscription services

anti-spam protection

and more

User Enrico333 says:

"Forms have been a pain for as long as I can recall - this has truly made my life easier."

Admittedly, a feedback form is more limited in function than a general contact form, but since contact forms can also be used to leave feedback, I thought: why not include a bona fide feedback form in this list?

Feedback Form allows your users to rate your product or service and get the kind of in-depth feedback necessary to improve your business. Feedback Form is super easy to use and can be added to any website in the shortest amount of time.

Best features:

multi-purpose feedback form

fully customisable

pop-up form (no page reload)

form validation

and more

User diwep06 says:

"Thanks for the Script ... Ultra nice, simple to set up and no skill needed."

ContactPlus+ is a clean and simple contact form which comes in three styles: an unstyled version that you can build to suit your taste, a normal form with just the essential information needed on a contact form, and a longer form to accommodate an address.

Best features:

Captcha verification

successful submission message

two styled versions and one unstyled version

and more

User itscody says:

"He went above and beyond to make sure this worked as I wanted with the overly complicated design of my website."

Though not the prettiest contact form in this list, Easy Contact Form With Attachments is certainly one of the easiest to add to your site. Furthermore, configuration requires just your email address and company info. The form offers five different themes to choose from and, as the name suggests, allows you to send file attachments.

Best features:

attachment file size limit can be adjusted up from the default of 5 MB

user friendly with one-click human verification against spam bots

optional phone number field and company information

error messages can be easily modified

and more

User powerj says:

"Excellent support! Great code quality and best customer service!"

Conclusion

These 12 Best Contact Form PHP Scripts just scratch the surface of products available at Envato Market, so if none of them fit your needs, there are plenty of other great options you may prefer.

]]>2018-04-16T12:53:19+00:00//www.4elements.com/blog/read/getting_started_with_the_mojs_animation_library_the_shapeswirl_and_stagger_
https://www.4elements.com/blog/read/getting_started_with_the_mojs_animation_library_the_shapeswirl_and_stagger_#When:12:00:00ZThe first and second tutorials of this series covered how to animate different HTML elements and SVG shapes using mojs. In this tutorial, we will learn about two more modules which can make our animations more interesting. The ShapeSwirl module allows you to add a swirling motion to any shape that you create. The stagger module, on the other hand, allows you to create and animate multiple shapes at once.

Using the ShapeSwirl Module

The ShapeSwirl module in mojs has a constructor which accepts all the properties of the Shape module. It also accepts some additional properties which allow it to create a swirling motion.

You can specify the amplitude or size of the swirl using the swirlSize property. The oscillation frequency during the swirling motion is determined by the value of the swirlFrequency property. You can also scale down the total path length of the swirling shape using the pathScale property. Valid values for this property range from 0 to 1. The direction of the motion can be specified using the direction property. Keep in mind that direction only has two valid values: -1 and 1. The shapes in a swirling motion will follow a sinusoidal path by default. However, you can animate them along straight lines by setting the value of the isSwirl property to false.

Besides these additional properties, the ShapeSwirl module also changes the default value of some properties from the Shape module. The radius of any swirling shape is set to 5 by default. Similarly, the scale property is set to be animated from 1 to 0 in the ShapeSwirl module.

In the following code snippet, I have used all these properties to animate two circles in a swirling motion.
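The original snippet is not reproduced here, but a configuration along those lines might look like the sketch below. All values are illustrative, not the original demo's; in a browser with mojs loaded, each object would be passed to new mojs.ShapeSwirl(...) and started with .play().

```javascript
// Two circles in a swirling motion, shown as plain options objects so each
// property is visible. Values are illustrative. In a browser:
//   new mojs.ShapeSwirl(leftCircle).play();
const leftCircle = {
  shape: 'circle',
  fill: 'deeppink',
  y: { 0: -200 },       // animate 200px upwards
  duration: 2000,
  swirlSize: 15,        // amplitude of the swirl
  swirlFrequency: 5,    // oscillation frequency
  pathScale: 0.8,       // scale down the total path length (0 to 1)
  direction: -1         // swirl anti-clockwise
};

const rightCircle = {
  shape: 'circle',
  fill: 'cyan',
  y: { 0: -200 },
  duration: 2000,
  swirlSize: 15,
  swirlFrequency: 5,
  pathScale: 0.8,
  direction: 1          // swirl clockwise
};
```

Note that isSwirl is left at its default of true here; setting it to false on either object would move that circle along a straight line instead.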

In the following demo, you can click on the Play button to animate two circles, a triangle and a cross in a swirling motion.

Using the Stagger Module

Unlike all other modules that we have discussed so far, stagger is not a constructor. This module is actually a function which can be wrapped around any other module to animate multiple shapes or elements at once. This can be very helpful when you want to apply the same animation sequence on different shapes but still change their magnitude independently.

Once you have wrapped the Shape module inside the stagger() function, you will be able to specify the number of elements to animate using the quantifier property. After that, you can specify the value of all other Shape related properties. It will now become possible for each property to accept an array of values to be applied on individual shapes sequentially. If you want all shapes to have the same value for a particular property, you can just set the property to be equal to that particular value instead of an array of values.

The following example should clarify how the values are assigned to different shapes:
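Here is a sketch of it as a plain options object. The three middle radius and x values are assumptions, since the text only specifies the first and last shapes; in a browser, the wrapped module would be created and played with mojs.stagger.

```javascript
// Reconstruction of the stagger example; middle radius and x values are
// assumptions. In a browser:
//   const StaggerShape = mojs.stagger(mojs.Shape);
//   new StaggerShape(options).play();
const options = {
  quantifier: 5,                  // create five shapes
  shape: 'polygon',               // default points: 3, so triangles
  fill: 'yellow',                 // a single value is shared by all shapes
  stroke: 'black',
  strokeWidth: 5,
  radius: [20, 30, 40, 50, 60],   // one value per shape, first to last
  // one { start: end } pair per shape for the horizontal position
  x: [{ 0: 100 }, { 0: 150 }, { 0: 200 }, { 0: 250 }, { 0: 300 }],
  duration: 2000
};
```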

We begin by wrapping the Shape module inside the stagger() function. This allows us to create multiple shapes at once. We have set the value of the quantifier property to 5. This creates five different shapes, which in our case are polygons. Each polygon is a triangle because the default value of the points property is 3. We have already covered all these properties in the second tutorial of the series.

There is only one value of fill, stroke, and strokeWidth. This means that all the triangles will be filled with yellow and will have a black stroke. The width of stroke in each case would be 5px. The value of the radius property, on the other hand, is an array of five integers. Each integer determines the radius of one triangle in the group. The value 20 is assigned to the first triangle, and the value 60 is assigned to the last triangle.

All the properties have had the same initial and final values for the individual triangles so far. This means that none of the properties would be animated. However, the value of the x property is an array of objects containing the initial and final value of the horizontal position of each triangle. The first triangle will translate from x:0 to x:100, and the last triangle will translate from x:0 to x:300. The animation duration in each case would be 2000 milliseconds.

If there is a fixed step between different values of a property, you can also use stagger strings to specify the initial value and the increments. Stagger strings accept two parameters. The first is the start value, which is assigned to the first element in the group. The second value is step, which determines the increase or decrease in value for each successive shape. When only one value is passed to the stagger string, it is considered to be the step, and the start value in this case is assumed to be zero.
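As a model of that behavior (plain JavaScript, not mojs's own code), expanding a stagger string with a start value and a step over a group of shapes works like this:

```javascript
// How a stagger string such as stagger(10, 5) expands into per-shape values;
// this mirrors the described behavior, it is not mojs internals.
function expandStagger(start, step, count) {
  const values = [];
  for (let i = 0; i < count; i += 1) {
    values.push(start + step * i); // first shape gets start, then +step each
  }
  return values;
}

console.log(expandStagger(10, 5, 4)); // → [ 10, 15, 20, 25 ]
console.log(expandStagger(0, 20, 4)); // single-parameter case: start is 0
```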

You can also assign random values to different shapes in a group using rand strings. You just have to supply a minimum and maximum value to a rand string, and mojs will automatically assign a value between these limits to individual shapes in the group.

In the following example, we are using the rand strings to randomly set the number of points for each polygon. You may have noticed that the total number of polygons we are rendering is 25, but the fill array only has four colors. When the array length is smaller than the value of the quantifier, the values for different shapes are determined by continuously cycling through the array until all the shapes have been assigned a value. For example, after assigning the color of the first four polygons, the color of the fifth polygon would be orange, the color of the sixth polygon would be yellow, and so on.
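The cycling behavior can be modeled with a simple modulo. This is plain JavaScript rather than mojs internals, and only orange and yellow come from the example above; the other two colors are placeholders.

```javascript
// Cycling a short value array over a larger quantifier, as described above.
// 'cyan' and 'magenta' are placeholder colors.
const fills = ['orange', 'yellow', 'cyan', 'magenta'];
const quantifier = 25;

const assigned = [];
for (let i = 0; i < quantifier; i += 1) {
  assigned.push(fills[i % fills.length]); // wrap around when the array ends
}

console.log(assigned[4]); // fifth polygon → orange (the cycle restarts)
console.log(assigned[5]); // sixth polygon → yellow
```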

The stagger string sets the radius of the first polygon equal to 10 and then keeps increasing the radius of subsequent polygons by 1. The horizontal position of each polygon is similarly increased by 20, and the vertical position is determined randomly. The final angle value for each polygon is randomly set between -120 and 120. This way, some polygons rotate in a clockwise direction while others rotate in an anti-clockwise direction. The angle animation is also given its own easing function, which is different from the common animation of other properties.

Final Thoughts

We covered two more mojs modules in this tutorial. The ShapeSwirl module allows us to animate different shapes in a swirling motion. The stagger module allows us to animate multiple shape elements at once.

Each shape in a stagger group can be animated independently without any interference from other shapes. This makes the stagger module incredibly useful. We also learned how to use stagger strings to assign values with fixed steps to properties of different shapes.

If you have any questions related to this tutorial, please let me know in the comments. We will learn about the Burst module in the next tutorial of this series.

For additional resources to study or to use in your work, check out what we have available in the Envato Market.

]]>2018-04-16T12:00:00+00:00//www.4elements.com/blog/read/getting_started_with_the_mojs_animation_library_the_shape_module
https://www.4elements.com/blog/read/getting_started_with_the_mojs_animation_library_the_shape_module#When:12:00:00ZIn the previous tutorial, we used mojs to animate different HTML elements on a webpage. We used the library to mainly animate div elements which looked like squares or circles. However, you can use the Html module to animate all kinds of elements like images or headings. If you actually intend to animate basic shapes using mojs, you should probably use the Shape module from the library.

The Shape module allows you to create basic shapes in the DOM using SVG. All you have to do is specify the type of shape that you want to create, and mojs will take care of the rest. This module also allows you to animate different shapes that you create.

In this tutorial, we will cover the basics of the Shape module and how you can use it to create different shapes and animate them.

Creating Shapes in Mojs

You need to instantiate a mojs Shape object in order to create different shapes. This object will accept different parameters which can be used to control the color, size, angle, etc. of the shapes that you create.

By default, any shape that you create will use the document body as its parent. You can specify any other element as its parent using the parent property. You can also assign a class to any shape that you create with the help of the className property. The library will not assign any default class if you skip this property.

Mojs has eight different shapes built in so that you can create them directly by setting a value for the shape property. You can set its value to circle to create circles, rect to create rectangles, and polygon to create polygons. You can also draw straight lines by setting the value of shape to be lines. The library will draw two perpendicular lines if the shape value is cross and a number of parallel lines if the shape is equal. Similarly, zigzag lines can be created by setting the property value to zigzag.

The shape object also has a points property which has different meanings for different shapes. It determines the total number of sides in a polygon and the total number of parallel lines in an equal shape. The points property can also be used to set the number of bends in a zigzag line.

As I mentioned earlier, mojs creates all these shapes using SVG. This means that the Shape object will also have some SVG specific properties to control the appearance of these shapes. You can set the fill color of a mojs shape using the fill property. When no color is specified, the library will use the deeppink color to fill the shape. Similarly, you can specify the stroke color for a shape using the stroke property. When no stroke color is specified, mojs keeps the stroke transparent. You can control the fill and stroke opacity for a shape using the fillOpacity and strokeOpacity properties. They can have any value between 0 and 1.

Mojs allows you to control other stroke-related properties of a shape as well. For instance, you can specify the pattern of dashes and gaps in a stroke path using the strokeDasharray property. This property accepts both strings and numbers as valid values. Its default value is zero, which means that the stroke will be a solid line. The width of a stroke can be specified using the strokeWidth property. All the strokes will be 2px wide by default. The shape of different lines at their end points can be specified using the strokeLinecap property. Valid values for strokeLinecap are butt, round, and square.

Any shape that you create is placed at the center of its parent element by default. This is because the left and right properties for a shape are set to 50% each. You can change the values of these properties to place the elements in different locations. Another way to control the position of a shape is with the help of the x and y properties. They determine how much a shape should be shifted in the horizontal and vertical direction respectively.

You can specify the radius of a shape using the radius property. This value is used to determine the size of a particular shape. You can also use radiusX and radiusY to specify the size of a shape in a particular direction. Another way of controlling the size of a shape is with the help of the scale property. The default value of scale is 1, but you can set it to any other number you like. You can also scale a shape in a particular direction using the scaleX and scaleY properties.

The origin of all these transformations of a shape is its center by default. For example, if you rotate any shape by specifying a value for the angle property, the shape will rotate around its center. If you want to rotate a shape around some other point, you can specify it using the origin property. This property accepts a string as its value. Setting it to '0% 0%' will rotate, scale or translate the shape around its top left corner. Similarly, setting it to '50% 0%' will rotate, scale or translate the shape around the center of its top edge.

You can use all these properties we just discussed to create a large variety of shapes. Here are a few examples:
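For instance, a triangle and a zigzag line could be configured as sketched below. The values are illustrative; in a browser, each object would be passed to new mojs.Shape(...).

```javascript
// Illustrative shape configurations; in a browser: new mojs.Shape(triangle).
const triangle = {
  shape: 'polygon',
  points: 3,               // three sides
  fill: 'deeppink',
  stroke: 'black',
  strokeWidth: 4,
  radius: 40,
  left: '25%',             // shift away from the default centered position
  angle: 30                // rotate around the center (the default origin)
};

const zigzag = {
  shape: 'zigzag',
  points: 9,               // number of bends
  fill: 'none',
  stroke: 'deeppink',
  strokeWidth: 2,
  strokeDasharray: '10 5', // pattern of dashes and gaps in the stroke
  radius: 50,
  left: '75%'
};
```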

The shapes created by the above code are shown in the CodePen demo below:

Animating Shapes in Mojs

You can animate almost all the properties of a shape that we discussed in the previous section. For instance, you can animate the number of points in a polygon by specifying different initial and final values. You can also animate the origin of a shape from '50% 50%' to any other value like '75% 75%'. Other properties like angle and scale behave just like they did in the Html module.

The duration, delay and speed of different animations can be controlled using the duration, delay and speed properties respectively. The repeat property also works like it did in the Html module. You can set it to 0 if you want to play the animation only once. Similarly, you can set it to 3 to play the animation 4 times. All the easing values that were valid for the Html module are also valid for the Shape module.

The only difference between the animation capabilities of these two modules is that you cannot specify the animation parameters individually for the properties in the Shape module. All the properties that you are animating will have the same duration, delay, repetitions, etc.

Here is an example where we animate the x position, scale and angle of a circle:
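A sketch of such a configuration is shown below, with illustrative values; in a browser it would be passed to new mojs.Shape(...) and started with .play().

```javascript
// Animating the x position, scale and angle of a circle; values are
// illustrative. In a browser: new mojs.Shape(circleAnimation).play();
const circleAnimation = {
  shape: 'circle',
  radius: 25,
  fill: 'deeppink',
  x: { 0: 150 },       // initial and final values as a { from: to } pair
  scale: { 1: 2 },
  angle: { 0: 360 },
  duration: 1500
};
```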

One way to control the playback of different animations is by using the .then() method to specify a new set of properties to be animated after the first animation sequence has fully completed. You can give all animation properties new initial and final values inside .then(). Here is an example:
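The two stages could be sketched as the following pair of option objects (values are illustrative); in a browser they would be chained as new mojs.Shape(firstStage).then(secondStage).play().

```javascript
// A first animation stage followed by a second one supplied to .then();
// values are illustrative, not from the original demo.
const firstStage = {
  shape: 'rect',
  x: { 0: 100 },
  angle: { 0: 90 },
  duration: 1000
};

const secondStage = {
  x: { 100: 0 },       // fresh initial and final values for the same property
  angle: { 90: 180 },
  scale: { 1: 1.5 },
  duration: 800
};
```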

Final Thoughts

In this tutorial, we learned how to create different shapes using mojs and how to animate the properties of these shapes.

The Shape module has all the animation capabilities of the Html module. The only difference is that the properties cannot be animated individually. They can only be animated as a group. You can also control the animation playback by using different methods to play, pause, stop and resume the animations at any point. I covered these methods in detail in the first tutorial of the series.

If you have any questions related to this tutorial, feel free to post a comment. In the next tutorial, you will learn about the ShapeSwirl and stagger modules in mojs.

]]>2018-04-13T12:00:00+00:00//www.4elements.com/blog/read/getting_started_with_the_mojs_animation_library_the_html_module
https://www.4elements.com/blog/read/getting_started_with_the_mojs_animation_library_the_html_module#When:12:00:00ZA lot of websites now use some sort of animation to make their landing pages more appealing. Thankfully, there are many libraries which allow you to animate elements on a webpage without doing everything from scratch. In this tutorial, you will learn about one such library called mojs.

The library is very easy to use because of its simple declarative API. The animations that you can create with mojs will all be smooth and retina ready so that everything looks professional.

Installing Mojs

There are many ways to include mojs in your projects. You can grab the library from its GitHub repository. Alternatively, you can directly include a link to the latest version of the library from different CDNs in your project.

<script src="//cdn.jsdelivr.net/mojs/latest/mo.min.js"></script>

Developers can also install mojs using package managers like npm and bower by running one of the following commands:

npm install mo-js
bower install mojs

Once you have included the library in your project, you can start using different modules to create interesting animations.

The HTML Module in Mojs

This tutorial will focus on the HTML module in the mojs library. This module can be used to animate different HTML elements on the webpage.

The first thing that you need to do in order to animate a DOM element is create a mojs Html object. You can specify the selector of an element and its properties that you want to animate inside this object.

Setting a value for el will let you identify the element which you want to animate using mojs. You can either set its value as a selector or a DOM node.

The HTML module has some predefined attributes which can be used to animate different transform-related properties of a DOM element. For example, you can control the translation animation of an element along the x, y and z axes by specifying start and end values for the x, y and z properties. You can also rotate any element along different axes by using the angleX, angleY and angleZ properties. This is similar to the corresponding rotateX(), rotateY() and rotateZ() transforms in CSS. You can also skew an element along the X or Y axis with the help of the skewX and skewY properties.

Applying scaling animations on different elements is just as easy. If you want to scale an element in both directions, simply set a value for the scale property. Similarly, you can animate the scaling of elements along different axes by setting appropriate values for the scaleX and scaleY properties.

Besides all these properties which let you set the initial and final values of the animation, there are some other properties which control the animation playback. You can specify the duration of the animation using the duration property. The provided value needs to be a number, and it sets the animation duration in milliseconds. If you want to start an animation after some delay, you can set the delay value using the delay property. Just like duration, delay also expects its value to be a number.

Animations can be repeated more than once with the help of the repeat property. Its default value is zero, which means that the animation would be played only once. Setting it to 1 will play the animation twice, and setting it to 2 will play the animation three times. Once the animation has completed its first iteration, the element will go back to its initial state and start animating again (if you have set a value for the repeat property). This sudden jump from the final state to initial state may not be desirable in all cases.

If you want the animation to play backwards once it has reached the final state, you can set the value of the isYoyo property to true. It is set to false by default. Finally, you can set the speed at which the animation should be played using the speed property. Its default value is 1. Setting it to 2 will play the animation twice as fast. Similarly, setting it to 0.5 will play the animation at half the speed.

The mojs Html objects that you created will not animate the respective elements by themselves. You will have to call the play() method on each mojs Html animation that you want to play. Here is an example which animates three different boxes using all the properties we just discussed:
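The three configurations could look like the sketch below. The selectors and values are illustrative; in a browser, each object would be passed to new mojs.Html(...) and started with .play().

```javascript
// Three boxes animated with the Html module; selectors and values are
// illustrative. In a browser: new mojs.Html(boxA).play();
const boxA = {
  el: '.box-a',          // selector of the element to animate
  x: { 0: 200 },         // translate along the x axis
  duration: 1000
};

const boxB = {
  el: '.box-b',
  angleZ: { 0: 180 },    // like rotateZ() in CSS
  skewX: { 0: 20 },
  duration: 1000,
  delay: 200
};

const boxC = {
  el: '.box-c',
  scale: { 1: 1.5 },
  repeat: 2,             // play the animation three times
  isYoyo: true,          // play backwards after reaching the final state
  speed: 0.5             // half the normal playback speed
};
```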

You are not limited to just animating the transform properties of an element. The mojs animation library allows you to animate all other animatable CSS properties as well. You just have to make sure that you provide valid initial and final values for them. For instance, you can animate the background color of an element by specifying valid values for background.

If the CSS property that you want to animate contains a hyphen, you can remove the hyphen and convert the property name to camelCase when setting initial and final values inside the mojs Html object. This means that you can animate the border-radius by setting a valid value for the borderRadius property. The following example should make everything clear:

In the above example, the border color changes from black to red while the border radius animates from 0 to 50%. You should note that a unitless number will be considered a pixel value, while a number with units should be specified as a string like '50%'.

So far we have used a single set of tween properties to control the playback of different animations. This meant that an element would take the same time to move from x:0 to x:200 as it would take to scale from scale:1 to scale:2.

This may not be a desirable behavior as you might want to delay the animation of some properties and control their duration as well. In such cases, you can specify the animation playback parameters of each property inside the property object itself.

Easing Functions Available in Mojs

Every animation that you create will have the sin.out easing applied to it by default. If you want the animation playback to progress using a different easing function, you can specify its value using the easing property. By default, the value specified for easing is also used when the animation is playing backwards. If you want to apply a different easing for backward animations, you can set a value for the backwardEasing property.

The mojs library has 11 different built-in easing functions. These are linear, ease, sin, quad, cubic, quart, quint, expo, circ, back, and elastic. The linear easing only has one variation named linear.none. This makes sense because the animation will progress with the same speed at different stages. All other easing functions have three different variations with in, out and inout appended at the end.

There are two methods to specify the easing function for an animation. You can either use a string like elastic.in or you can access the easing functions directly using the mojs.easing object like mojs.easing.elastic.inout. In the embedded CodePen demo, I have applied different easing functions on each bar so its width will change at a different pace. This will give you an idea of how the animation speed differs with each easing.
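As a rough illustration of how the three variations differ, here are the standard sinusoidal easing formulas written out in plain JavaScript; mojs's own curves may differ in small details.

```javascript
// Standard sinusoidal easing formulas, mapping progress t in [0, 1] to eased
// progress; an illustration of in/out/inout, not mojs's exact implementation.
const sinEasing = {
  in: t => 1 - Math.cos((t * Math.PI) / 2),   // slow start, fast end
  out: t => Math.sin((t * Math.PI) / 2),      // fast start, slow end (default)
  inout: t => (1 - Math.cos(Math.PI * t)) / 2 // slow at both ends
};

console.log(sinEasing.out(0)); // → 0
console.log(sinEasing.out(1)); // → 1
console.log(sinEasing.inout(0.5)); // ≈ 0.5 (halfway)
```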

Since we only want to change the easing function applied to each box, I have created a loop to iterate over them and then apply an easing function picked up from the easings array. This prevents unnecessary code duplication. You can use the same technique to animate multiple elements where the property values vary based on a pattern.

Controlling Animation Playback

Mojs provides a lot of methods which allow us to control the animation playback for different elements once it has already started. You can pause the animation at any time by calling the pause() method. Similarly, you can resume any paused animation by calling the resume() method.

Animations that have been paused using pause() will always resume from the point at which you called pause(). If you want the animation to start from the beginning after it has been paused, you should use the stop() method instead.

You can also play the animation backwards using the playBackward() method. Earlier, we used the speed property to control the speed at which mojs played an animation. Mojs also has a setSpeed() method which can set the animation speed while it is still in progress. In the following example, I have used all these methods to control the animation playback based on button clicks.

In the following demo, the current animation playback speed is shown in the black box in the bottom left corner. Clicking on Faster will increase the current speed by 1, and clicking on Slower will halve the current speed.
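The speed-button logic from that demo can be modeled in isolation as plain functions; in the demo itself, each click handler would also call setSpeed(speed) on the animation.

```javascript
// Model of the demo's speed buttons; in the real demo each function would
// also call animation.setSpeed(speed).
let speed = 1; // the mojs default playback speed

function faster() {
  speed += 1;  // "Faster" adds 1 to the current speed
  return speed;
}

function slower() {
  speed /= 2;  // "Slower" halves the current speed
  return speed;
}

console.log(faster()); // → 2
console.log(faster()); // → 3
console.log(slower()); // → 1.5
```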

Final Thoughts

In this tutorial, we learned how to animate different DOM elements on a webpage using the HTML module in mojs. We can specify the element we want to animate using either a selector or a DOM node. The library allows you to animate different transform properties and the opacity of any element using the built-in properties of the mojs Html object. However, you can also animate other CSS properties by specifying the name using camelCase notation.

JavaScript is not without its learning curves, and there are plenty of frameworks and libraries to keep you busy, as well. If you’re looking for additional resources to study or to use in your work, check out what we have available in the Envato Market.

Let me know if there is anything you would like me to clarify in this tutorial. We will cover the Shape module from the mojs library in the next tutorial.

Lean, Agile, Waterfall; there are dozens of project management methodologies out there, and each one works to abstract your project into a common series of tasks and formulas.

When it comes to software engineering, this can become complicated. For instance, it can cause issues between developers and managers whose organization styles differ. The manager needs that layer of abstraction to keep track of necessary metrics. The developer, however, can suffer from continual small task fatigue and feelings of being micromanaged.

Regardless of the programming language, framework, or libraries, none of them will perfectly fit into the variety of project management methodologies that exist. So how do we improve processes?

By categorizing the differences between tools. Let’s dig into the distinct features that comprise WordPress, and how they can impact the perspectives of managers and developers.

How to Adapt Your Project Management System to WordPress

To adapt our system, we first have to understand the nuances of WordPress. Of course, we don’t need to take every coding standard or functionality difference into account, but we do need to refer to significant sections that may make a difference. We’ll group these into three categories:

Challenges: Any piece that needs to be planned around when defining tasks, milestones, and implementations for the project.

Risks: Large issues that should be hedged against when possible. These are likely weaknesses in the framework that may push back development if they come to fruition.

Opportunities: Unique benefits in the framework that may provide additional features, make development more efficient, or in some way provide a competitive or internal advantage.

The difficulty with identifying these sections is that while they can mostly be learned through research and preparation, many are simply experienced during the attempt. In addition, defining them requires critical evaluation from both developers and managers, which may not always occur.

To adapt your current project management system to WordPress, let’s take a look at the unique Challenges, Risks, and Opportunities that are commonly faced.

Unique Challenges of Using WordPress

Every Content Management System by nature has its own set of downsides. With the involvement of different parties possessing different goals, compromises are bound to happen. Whether it’s users sacrificing customization or developers losing maintenance ease, something has to give. Here are some of the challenges using WordPress presents:

Using an Open-Source Base

Having an open-source base brings with it a bevy of pros and cons. As far as the challenges that are brought on by this, here are the most important:

Code-Base Maintenance

WordPress’s open-source base means that you’ll benefit from regular improvements to the system, but have very little control over those improvements. If a particular bug or feature change is an issue with your build, there is no guarantee of when it will be dealt with. Of course, you can always contribute to the base itself to speed things along, but with so many users, your addition may not be approved. After all, what you have in mind may not be the best solution for most users.

Dealing With Updates

To combat this, you can modify your own codebase or extend it as necessary, but this creates a new set of challenges. If you’ve created a workaround, you will need to be aware of changes to the central codebase that may alter or correct your solution in the future. If you’ve modified the codebase, you will need to be aware that updating WordPress core may alter the functionality that you’ve built, and plan accordingly.

Building Non-Generalist Sites

Because of the sheer number of websites that rely on WordPress, it’s likely that there will come a time when your site and the future of WordPress might be at odds. This becomes more true as your site moves away from what a typical WordPress site might look like.

To counteract this, try to work within WordPress’s constraints as much as possible, to minimize any issues that might arise from future updates. If while planning your project a large portion seems to be fighting the core rather than benefiting from it, consider using another CMS. Otherwise, you can also advise clients against updating WordPress after the project launches, though that brings with it a new set of challenges.

“Piecemeal” Development

The last major challenge to be aware of is the separation of components within WordPress. The divided structure of plugins, themes, and core can be a great tool for planning and hierarchy, but introduces additional third-party software.

Plugins and themes that are being used, but have not been created in-house, should receive an extra level of care. Take the time to do a proper discovery of these components to deal with possible complications.

Unique Risks of Using WordPress

Risks are a level beyond challenges, typically indicating issues that could be catastrophic to a project or whose solutions rest outside of development itself. Take a look at the two biggest that I’ve run into:

Security Issues

With code coming from multiple sources, it’s inevitable that sometimes a bug or exploit will come to light that might leave your project vulnerable. While these issues are typically fixed within days of exposure, the time in-between can be especially hazardous.

Because of the large number of sites using WordPress, exploits become well known quickly and can potentially be utilized en masse. Making sure that your project uses a variety of security measures can help to reduce the risk during those couple of days, but sometimes the only solution is to wait for a patch.

Inclusion of Third-Party Projects

Plugins are one of the most important features for many WordPress users. On the development side, however, plugins introduce unknown elements. Since they can be upgraded separately from the rest of the system (and potentially by your client), utilizing plugins as a key component in your project could be problematic later on.

Additionally, plugins need to be properly vetted before inclusion, otherwise you risk the potential of including dangerous code within your project.

Unique Benefits of Using WordPress

WordPress may have its own risks and challenges, but it has plenty of benefits as well. After all, it’s the most popular CMS on the web for a reason. Here are the pros to the cons above:

Using an Open-Source Base

We talked about the downsides of an open-source base, but there are many upsides as well. Using WordPress is free, and it boasts a wide range of documentation as well as extensive tutorials around the internet. This means that developers can quickly get up to speed on your project, and expanding your team’s knowledge during a project isn’t as arduous a task.

The other major benefit of the open-source base is the multitudes of people that work together to make it happen. A team of a handful of individuals could make something similar, but it’s unlikely to happen at the same pace and quality as WordPress.

Having many varied developers contributing to the code, paired with structured reviews, means that your projects are built on a solid, quality source. Having a large number of contributors also speeds along production, allowing features to be added quickly and patches to be issued in limited timeframes.

Robust Third-Party Solution Availability

WordPress boasts an extensive array of plugins, themes, and code snippets that can help streamline the production process. By utilizing these third-party solutions, you can quickly prototype—and even implement—entirely finished components into your project, offering additional features and efficiency.

Even if a plugin doesn’t quite do what you want, the most popular ones adhere to WordPress coding standards, making them easily adaptable to your needs.

Compartmentalized Design

A predefined and well-structured hierarchy and template system can help projects start off in an organized way. Instead of spending time deciding on engineering structures, WordPress allows efficient work within a well-established system. In addition, it’s suitable for most project management systems and allows for multiple pieces of the project to be developed simultaneously.

This compartmentalized design also makes it easy to determine where issues originate, and to maintain code throughout a project’s iterations.

Aligning Team Perspectives

Taking a Content Management System like WordPress and breaking it down into how managers and developers perceive it can streamline communication overall. Integrating these perspectives in your project management style should alleviate some anxiety with your developers. It gives them the benefit of the doubt, while adding some much-needed understanding to the team.

If you're looking for other utilities to help you build out your growing set of tools for WordPress or for code to study and become more well-versed in WordPress, don't forget to see what we have available in Envato Market.

Did I miss any key parts of WordPress that project managers should be aware of? Let me know in the comments!

In this tutorial, I will cover the basics of the CSS grid layout with example scenarios. CSS Grid is supported by almost all modern browsers now, and it is ready to be used in production. Unlike other layout methods such as flexbox, the grid layout gives you two degrees of freedom, which makes it so versatile that positioning the elements is just a breeze.

HTML Structure for the CSS Grid Layout

In order to use the CSS Grid layout, your HTML elements should have a specific structure.

You need to wrap the elements that you want to control within a parent container DIV.
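For example, a 12-item grid might be wrapped like this (the class names are placeholders, not from a specific demo):

```html
<div class="grid-container">
  <div class="item">1</div>
  <div class="item">2</div>
  <!-- ...items 3 through 11... -->
  <div class="item">12</div>
</div>
```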

Repeat a grid-template Pattern

If you have a repeating pattern for grid-template, you can just use repeat() and tell it how many times to repeat the same pattern.

For instance, say you have 12 elements, and you want to lay them out horizontally with equal widths. You could write 1fr 12 times inside grid-template-columns, but that is verbose and error-prone. Instead, you can use repeat(12, 1fr).
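In CSS, that looks like this (the class name is a placeholder):

```css
.grid-container {
  display: grid;
  /* equivalent to writing "1fr" twelve times */
  grid-template-columns: repeat(12, 1fr);
}
```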

Jest is an open-source JavaScript testing library from Facebook. Its slogan is "Delightful JavaScript Testing". While Jest can be used to test any JavaScript library, it shines when it comes to React and React Native.

This is no surprise as both React and Jest come from Facebook, which is a major user of both. In this tutorial I'll show you eight different aspects of Jest that make it such a delight for testing React applications.

1. Jest Is Easy to Set Up

Installing Jest is pretty simple. But if you use create-react-app to create your React project, you don't even have to do that: the jest package comes bundled in, and you can just start writing tests immediately.

2. Jest Is Lightning Fast

Jest is fast. Very fast. When your tests are CPU bound, it can shave significant time from your test runs. Airbnb switched from Mocha to Jest, and their total test runtime dropped from more than 12 minutes to only 4.5 minutes on a heavy-duty CI machine with 32 cores. Local tests used to take 45 minutes, which dropped to 14.5 minutes.

What makes Jest so fast? It's a combination of several factors:

Parallelization: this is pretty obvious, and other test frameworks use it too.

Run slowest tests first: this ensures all cores are utilized to the max.

Caching babel transforms: reduces CPU-intensive babel transforms.

3. Jest Is a One-Stop Shop

Jest comes with built-in matchers, spies, and its own extensive mocking library. It used to be based on Jasmine, so it inherited all of Jasmine's goodness. But in more recent versions Jest departed from Jasmine, yet kept the same functionality and added its own flavor and improvements.

When comparing it to a bespoke testing solution based on Mocha, it's clear that ease of use is a major concern of Jest's design.

4. Jest Has Awesome Mocks

Mocking is an incredibly important part of unit testing. This is especially important if you care about fast tests (and who doesn't?).

Mocking allows you to replace irrelevant dependencies that may be slow and even control time for code that relies on timing. Jest lets you fully control your dependencies and master time.

Simple Mock Functions

Mocking dependencies is a long-time tradition of unit tests. If your code is reading a file, writing to a file, calls some remote service or is accessing a database, it may be slow and it may be complicated to configure and clean up after the test. When running in parallel, it may not even be possible to control properly.

In these cases, it is better to replace the real dependency with a mock function that does nothing but just records the fact it was called, so you can verify the workflow. The jest.fn() mock function lets you provide canned return values (for multiple consecutive calls), and it records how many times it was called and what the parameters were in each call.
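To make the idea concrete outside a Jest run, here is a plain-JavaScript stand-in for what jest.fn() records (an illustration only; in real tests you would use jest.fn() itself):

```javascript
// Minimal stand-in for jest.fn(): records every call and returns
// canned values for consecutive calls.
function mockFn(...cannedReturns) {
  const calls = [];
  let callIndex = 0;
  const fn = (...args) => {
    calls.push(args);
    return callIndex < cannedReturns.length
      ? cannedReturns[callIndex++]
      : undefined;
  };
  fn.calls = calls;
  return fn;
}

const fetchUser = mockFn({ id: 1 }, { id: 2 });
fetchUser('alice');
fetchUser('bob');
// fetchUser.calls is now [['alice'], ['bob']]
```

With jest.fn(), the equivalent assertions would use mock.calls and mockReturnValueOnce().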

Manual Module Mocks

Sometimes you may need to replace a whole module with its data rather than a couple of functions. Jest lets you do that by placing your own module with the same name in a __mocks__ sub-directory.

Whenever your code is using the target module, it will access your mock rather than the real module. You can even selectively choose for some tests to use the original module by calling jest.unmock('moduleName').
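Assuming a module of your own named api.js, the layout might look like this (file names are illustrative; for modules you own, you typically also opt in with jest.mock('./api') in the test file):

```
src/
├── api.js          ← real module
├── __mocks__/
│   └── api.js      ← manual mock with the same name
└── api.test.js
```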

Timer Mocks

Timing is the bane of unit tests. What if you want to test code that times out after a minute? Code that fires every 30 seconds? Special code that runs a reconciliation algorithm at the end of the month?

Those are difficult to test. You can either succumb to the timing requirements of the original code (and then your tests will be very slow), or you can manipulate time, which is much more useful. Jest lets you fake timer-related functions such as setTimeout(), setInterval(), clearTimeout(), and clearInterval().

ES6 Class Mocks

Automatic mock: lets you spy on calls to constructor and all methods, but always returns undefined.

Manual mock: implement your own mock in the __mocks__ sub-directory.

Mock the class factory with a higher-order function.

Selective mocking using mockImplementation() or mockImplementationOnce().

5. Jest Supports TypeScript

TypeScript is a popular typed superset of JavaScript that compiles to plain JavaScript. Jest supports TypeScript via the ts-jest package. It describes itself as a TypeScript preprocessor with source map support for Jest and has a very active community.

6. Jest Has Got You Covered

Jest has built-in coverage reports. Your tests are only as good as their coverage. If you test only 80% of your code, then bugs in the other 20% will be discovered only in production.

Sometimes, it makes sense from a business perspective to skip testing for some parts of the system. For example, internal tools that your own expert engineers use and change frequently may not need the same level of rigorous testing as your production code. But, at any rate, this should be a conscious decision, and you should be able to see exactly the test coverage of different parts of your system.

To generate a coverage report for the simple palindrome example, run Jest with the --coverage flag.

7. Jest Does Snapshots

Snapshot testing is great. It lets you capture a string that represents your rendered component and store it in a file. Then you can compare it later to ensure that the UI didn't change. While it is ideal for React and React Native apps, you can also use snapshots to compare serialized values from other frameworks. If you intentionally change your UI, you will of course need to update your snapshot files to match.

8. Jest Does Delta Testing With Watch

Jest can run in watch mode, where it runs the tests automatically whenever you change the code. You run it with the --watchAll command-line argument, and it will monitor your application for changes. I ran jest in watch mode and introduced a bug on purpose to palindrome.js, and here is the result:

Conclusion

Jest is a fast testing framework that's easy to set up. It is actively developed and used by Facebook to test all their React applications as well as by many other developers and companies.

It has all you need in one convenient package, supports TypeScript, and in my opinion is the best option for React and React Native application testing. It is also very easy to migrate to it from other testing solutions.

Remember, React has grown in popularity. In fact, we have a number of items in the Envato Market that are available for purchase, review, implementation, and so on. If you’re looking for additional resources around React, don’t hesitate to check them out.

In the previous tutorial, you learned how to create an image editor using CamanJS which can apply basic filters like contrast, brightness, and noise to an image. CamanJS also has some other built-in filters like Nostalgia, Pinhole, Sunrise, etc., which we applied directly to the image.

In this tutorial, we will cover some more advanced features of the library like Layers, Blend Modes, and Events.

Layers in CamanJS

In the first tutorial, we only worked with a single layer which contained our image. All the filters that we applied only manipulated that main layer. CamanJS allows you to create multiple layers to enable you to edit your images in a more sophisticated manner. You can create nested layers, but they will always be applied on their parent layer like a stack.

Whenever you create a new layer and apply it on the parent layer, the default blend mode used will be normal. You can create a new layer on the canvas using the newLayer() method. When you create a new layer, you can also pass a callback function, which will be useful if you intend to manipulate the layer.

This function can be used for a lot of tasks like setting the blend mode for the new layer using the setBlendingMode() method. Similarly, you can control the opacity of the new layer using the opacity() method.

Any new layer that you create can be filled with a solid color using the fillColor() method. You can also copy the contents of the parent layer to the new layer using the copyParent() method. All the filters that we learned about in the previous tutorial can also be applied on the new layer that we are creating. For example, you can increase the brightness of the newly created layer by using this.filter.brightness(10).

Instead of copying the parent or filling the layer with a solid color, you also have the option to load any other image in the layer and overlay it on the parent. Just like the main image, you will be able to apply different filters to the overlaid image as well.

In the following code snippet, we have attached a click event handler to three buttons which will fill the new layer with a solid color, the parent layer, and an overlay image respectively.
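A guarded sketch of handlers along those lines might look as follows (the element IDs, colors, and opacity values are my assumptions; it assumes CamanJS is loaded and the image is already rendered on a canvas with the id "canvas"):

```javascript
// Fills a new layer with a solid color on top of the image.
function fillWithColor(color) {
  Caman('#canvas', function () {
    this.newLayer(function () {
      this.setBlendingMode('normal');
      this.opacity(60);        // keep the layer below partly visible
      this.fillColor(color);
    });
    this.render();
  });
}

// Copies the parent layer into the new layer and brightens it.
function overlayParentCopy() {
  Caman('#canvas', function () {
    this.newLayer(function () {
      this.opacity(50);
      this.copyParent();
      this.filter.brightness(10);
    });
    this.render();
  });
}

// Browser-only wiring (IDs are hypothetical):
// document.getElementById('fill-color')
//   .addEventListener('click', () => fillWithColor('#ff9e2c'));
```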

Blend Modes in CamanJS

In the previous section, we kept the opacity of any new layer that we added to the canvas below 100. This was done because the new layer would otherwise hide the old layer completely. When you place one layer over another, CamanJS allows you to specify a blend mode which determines the final outcome after the placement. The blend mode is set to normal by default.

This means that any new layer that you add on the canvas will make the layer below it invisible. The library has ten blend modes in total. These are normal, multiply, screen, overlay, difference, addition, exclusion, softLight, lighten, and darken.

As I mentioned earlier, the normal blend mode sets the final color to be equal to the color of the new layer. The multiply blend mode determines the final color of a pixel by multiplying the individual channels together and then dividing the result by 255. The difference and addition blend modes work in a similar manner, but they subtract and add the channels.

The darken blend mode sets the final color of a pixel to be equal to the lowest value of individual color channels. The lighten blend mode sets the final color of a pixel to be equal to the highest value of individual color channels. The exclusion blend mode is somewhat similar to difference, but it sets the contrast to a lower value. In the case of the screen blend mode, the final color is obtained by inverting the colors of each layer, multiplying them, and then again inverting the result.

The overlay blend mode acts like multiply if the bottom color is darker, and it acts like screen if the bottom color is lighter.
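The channel arithmetic described above is easy to express directly. Here is a sketch of the per-channel formulas (my own illustration of the math, not CamanJS source; channel values run from 0 to 255):

```javascript
// Per-channel blend formulas as described in the text.
const multiply = (a, b) => Math.round((a * b) / 255);
const addition = (a, b) => Math.min(a + b, 255); // clamped to 255
const difference = (a, b) => Math.abs(a - b);
const darken = (a, b) => Math.min(a, b);
const lighten = (a, b) => Math.max(a, b);
// screen: invert both channels, multiply, then invert the result
const screen = (a, b) => 255 - Math.round(((255 - a) * (255 - b)) / 255);

multiply(255, 128); // → 128: multiplying by pure white keeps the color
screen(0, 90);      // → 90: screening with black changes nothing
```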

If you want the colors in different layers to interact in a different manner, CamanJS also lets you define your own blend modes. We will cover this in the next tutorial of the series.

Here is the JavaScript code to apply different blend modes on an image:

In the above code snippet, we get the Hex color value from an input field. This color is then applied on the new layer. You can write the code to apply other blend modes in a similar manner.

Try to specify a color of your choice in the input field, and then apply any of the blend modes by clicking on the respective button. I have applied the blend modes on a solid color in the example, but you can also apply them on an overlaid image from the previous section.

Events in CamanJS

If you uploaded any large image in either the demo of the first tutorial or the second tutorial, you might have noticed that the result of any applied filter or blend mode became evident after a long time.

Large images have a lot of pixels, and calculating the final value of different channels for each pixel after applying a specific blend mode can be very time-consuming. For example, when applying the multiply blend mode on an image with dimensions 1920×1080, the device will have to perform multiplication and division over 6 million times.

In such cases, you can use events to give users some indication about the progress of a filter or blend mode. CamanJS has five different events which can be used to execute specific callback functions at different stages. These five events are processStart, processComplete, renderFinished, blockStarted, and blockFinished.

The processStart and processComplete events are triggered after a single filter starts or finishes its rendering process. When all the filters that you specified have been applied on an image, the library fires the renderFinished event.

CamanJS divides large images into blocks before starting to manipulate them. The blockStarted and blockFinished events are fired after individual blocks of the image have been processed by the library.

In our example, we will only be using the processStart and renderFinished events to inform users about the progress of our image editing operation.

With the processStart and processComplete events, you get access to the name of the process currently operating on the image. The blockStarted and blockFinished events, on the other hand, give you access to information like the total number of blocks, the current block being processed, and the number of finished blocks.
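A sketch of wiring those two events might look like this (it assumes CamanJS's Caman.Event.listen API and a status element with the id "status"; both are assumptions on my part):

```javascript
// Formats the status text shown while a filter runs.
function progressMessage(jobName) {
  return 'Processing: ' + jobName + '...';
}

// Browser-only wiring (assumes CamanJS is loaded and the page has a
// <div id="status"> below the canvas).
if (typeof Caman !== 'undefined') {
  Caman.Event.listen('processStart', (job) => {
    document.getElementById('status').textContent = progressMessage(job.name);
  });
  Caman.Event.listen('renderFinished', () => {
    document.getElementById('status').textContent = 'Done!';
  });
}
```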

Try clicking on any button in the demo below, and you will see the name of the process currently manipulating the image in the area below the canvas.

Final Thoughts

The first tutorial of the series showed you how to create a basic image editor with built-in filters from the CamanJS library. This tutorial showed you how to work with more than one layer and apply different filters and blend modes to each layer individually.

Since the image editing process can take a while for large images, we also learned how to indicate to users that the image editor is actually processing the image and not sitting idle.

In the next and final tutorial of the series, you will learn how to create your own blend modes and filters in CamanJS. If you have any questions related to this tutorial, feel free to let me know in the comments.

GraphQL is an emerging technology for creating APIs and sharing data between the server and front-end. In our new short course, Code a Front-End App With GraphQL and React, you'll learn how to connect to a GraphQL endpoint from a React app.

What You’ll Learn

In this quick, 50-minute course, Markus Mühlberger will show you how to configure the popular Apollo GraphQL client. This will let you seamlessly retrieve and integrate live server data into your app.

You'll learn how to structure your queries, access real-time data, perform mutations (updates to the server data), and handle errors. Along the way, you'll build a great-looking trip planning map for the Finnish public transportation system!

Watch the Introduction

Take the Course

You can take our new course straight away with a subscription to Envato Elements. For a single low monthly fee, you get access not only to this course, but also to our growing library of over 1,000 video courses and industry-leading eBooks on Envato Tuts+.

Plus you now get unlimited downloads from the huge Envato Elements library of 460,000+ creative assets. Create with unique fonts, photos, graphics and templates, and deliver better projects faster.

A while back, I wrote some tutorials which described how to apply different kinds of filters and blend modes to an image using just CSS. This could be very helpful in situations where you want to show the grayscale, blurred, or high-contrast version of the same image. Instead of creating four different images, you could just apply these effects to the original image using a few lines of CSS.

Using CSS filters and blend modes works nicely in most cases. However, CSS doesn't modify the pixels of the image itself. In other words, the filters and blend modes or any other effects are not permanent.

If someone downloads an image with CSS filters applied to it, they will get the original image and not the modified version. This can be a major setback if you were planning on creating an image editor for your users.

If you want the image modifications to be permanent and allow the user to download the modified image, you can use HTML5 canvas. The canvas element allows you to do a lot of things, including drawing lines and shapes, writing text, and rendering animations.

In this tutorial, we will focus on editing images loaded on the canvas. CSS3 already has built-in functionality to allow you to apply effects like contrast, brightness, and blurring directly. When working with HTML5 canvas, we will use a canvas manipulation library called CamanJS to edit the images.

The library supports basic effects like brightness, contrast, and saturation out of the box. This will save time and allow us to create more sophisticated filters based on these basic ones.

CamanJS Basics

The name of this library is based on the fact that it is used for doing (ca)nvas (man)ipulation in JavaScript (JS). Before you can start using different features of the library, you will have to include it in your project. This can be done either by downloading the library and hosting it yourself or by linking directly to a CDN.

There are two ways to use the library. The first option is to use the data-caman attribute with your image elements. This attribute can accept a combination of different CamanJS filters as its value. For example, if you want to increase the brightness of an image by 20 and the contrast by 10, you can use the following HTML:

Similarly, you can apply other filters like saturation, exposure, noise, sepia, etc. Besides the basic filters, CamanJS also gives you access to some more sophisticated filters out of the box. These filters can be applied to an image in a similar manner. To apply the sunrise filter, you can simply use the following HTML:

<img src="path/to/image.jpg"
data-caman="sunrise()">

Your second option for manipulating images is by calling Caman() with the id of the canvas where you have rendered the image and different filters that you want to apply to the rendered image.

In this series, we will be going the JavaScript way to create our image editor.
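The JavaScript route might look like the following sketch (it assumes CamanJS is loaded and the image is already drawn on a canvas with the id "canvas"; the filter values are just the ones from the earlier example):

```javascript
// Applies basic filters to the rendered image and commits them.
function applyBasicFilters() {
  Caman('#canvas', function () {
    this.brightness(20);
    this.contrast(10);
    this.render(); // commits the changes to the canvas
  });
}
```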

Implementing Upload and Download Functionality

You need to provide users with a way to upload the images they want to edit so that you can render them on the canvas for further manipulation. Once the users have made the changes, they should also be able to download the edited images. In this section, we will add these two functions to our image editor.

Let's begin with the HTML needed to add the canvas and upload/download buttons:

We begin by creating some variables to store the name of the image file selected by the user and the context for our canvas. After that, we write the code to get the image file from the file input after its change event is fired. The files selected by a user are stored in a FileList, and we can get the first file from the list using .files[0].

Once we have the file, we use a FileReader object to read the contents of the file selected by the user. The onload event for the FileReader is triggered after the selected file has been read successfully.

Inside the onload event handler for the FileReader object, we create an HTMLImageElement instance using the Image() constructor. The src attribute of the image is then set to the value of the result property of our FileReader.

Once the image has loaded successfully, we set the width and height of our canvas to be equal to the width and height of the image selected by the user. After that, we draw the image on the canvas and remove the data-caman-id attribute from the canvas.

The attribute is added automatically by CamanJS when setting up the canvas for editing an image. We need to remove it every time a user selects a new file in order to avoid any mixup between the old image file we were editing and the new file selected by the user.

Besides loading the image file in the canvas, we have also set the value of the fileName variable to be equal to the name of the file selected by the user. This will be useful when we are saving the edited image.

Users will now be able to upload different images in your image editor. Once they have edited the image, they would also like to download them. Let's write some code that will allow users to save the edited image file.

We use the jQuery .on() method to execute a piece of code every time the click event is fired for the download button. This code removes the file extension from the name of the image file selected by the user and replaces it with the suffix -edited.jpg. This name is then passed to the download function along with a reference to the canvas where we rendered and edited the image.

The download function creates a link and sets its download attribute to filename. The href attribute contains the data URI for the edited image. After setting the value of these two attributes, we programmatically fire the click event for our newly created link. This click starts the download of the edited image.
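The renaming and download steps can be sketched like this (the function names mirror the description above; the helpers themselves are my illustration, not the tutorial's exact code):

```javascript
// Derives the download name: strip the original extension and
// append "-edited.jpg".
function editedFileName(original) {
  return original.replace(/\.[^.\/]+$/, '') + '-edited.jpg';
}

// Browser-only sketch: create a link, point it at the canvas data URI,
// and fire a click to start the download.
function download(canvas, filename) {
  const link = document.createElement('a');
  link.download = filename;
  link.href = canvas.toDataURL('image/jpeg');
  link.click();
}

editedFileName('holiday.png'); // → 'holiday-edited.jpg'
```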

Applying Built-in Filters

As I mentioned in the beginning of the tutorial, CamanJS comes with basic built-in filters. So you can directly apply brightness, contrast, sepia, saturation, exposure, noise, sharpen, vibrance, and hue. Some filters like brightness and contrast can have a negative as well as a positive value.

You can make the values as high or as low as you want, but a sensible choice would be to keep them between -100 and 100. For example, the image becomes white when you set the brightness to 100. So any value above 100 will be useless. Similarly, the image will become completely gray if you set the value of the contrast to -100.

Other filters like sepia, noise, sharpen, and blur will only accept a positive value. Keep in mind that the hue filter covers the full 360-degree circle, with values ranging from 0 to 100. The image will look exactly the same when you set the hue to 0 or 100.

Filters like brightness and contrast have each been given increase and decrease buttons. However, the decrease button has been disabled for some filters, like noise, because they can't have a meaningful negative value.

We will apply the respective filters based on the button clicked with the help of the following JavaScript.

For the increase and decrease buttons, the strength of the filter is based on how its effect scales. For example, the brightness and contrast are increased by 10, but the value of gamma is only increased by 0.1 after each click.
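
One way to organise those per-filter step sizes is a small lookup table. The `filterSteps` names and the `stepValue` helper are our own illustration, with the step values taken from the text (10 for brightness and contrast, 0.1 for gamma):

```javascript
// How much one click changes each filter's value.
const filterSteps = {
  brightness: 10,
  contrast: 10,
  gamma: 0.1,
};

// Returns the new value after one click; direction is +1 for the
// increase button and -1 for the decrease button.
function stepValue(filter, current, direction) {
  const step = filterSteps[filter] !== undefined ? filterSteps[filter] : 1;
  return current + direction * step;
}
```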

The following CodePen demo shows the CamanJS image editor we created in action.

Some filters might take some time to produce their final result. In such cases, users might think that the filter is not working. We will use events to keep users updated about the progress of a filter. All of this will be discussed in the next tutorial.

Final Thoughts

The first tutorial was meant to teach you how to create an image editor with basic image upload and download functionality which users can use to edit images. We used basic filters like noise, contrast, and brightness as well as some more complicated effects like Vintage and Nostalgia, which are built into the library.

In the next tutorial, you will learn about more CamanJS features like layers, blend modes, and events. And in the meantime, don't forget to review what we have available in the Envato Market for purchase, study, use, and review.

]]>2018-03-28T12:00:00+00:00//www.4elements.com/blog/read/creating_an_image_editor_using_camanjs_creating_custom_filters_and_blend_mo
https://www.4elements.com/blog/read/creating_an_image_editor_using_camanjs_creating_custom_filters_and_blend_mo#When:12:00:00ZIn the first tutorial of our CamanJS image editor series, we only used built-in filters to edit our images. This limited us to some basic effects like brightness, contrast, and 18 other more complicated filters with names like Vintage, Sunrise, etc. They were all easy to apply, but we were not in full control of the individual pixels of the image we wanted to edit.

In the second tutorial, we learned about layers and blend modes, which gave us more control over the images we were editing. For instance, you could add a new layer on the canvas, fill it with a color or image, and then place it over the parent layer with a blend mode applied to it. However, we were still not creating our own filters, and the blend modes we could apply were limited to the ones already provided by CamanJS.

The aim of this tutorial will be to teach you how to create your own blend modes and filters. We will also address some bugs present in the library and how you can patch them when using CamanJS in your own projects.

Creating New Blend Modes

By default, CamanJS offers ten blend modes. These are normal, multiply, screen, overlay, difference, addition, exclusion, softLight, lighten, and darken. The library also allows you to register your own blend modes. This way, you can control how the corresponding pixels of the current layer and parent layer mix together in order to produce the final result.

You can create a new blend mode using Caman.Blender.register("blend_mode", callback);. Here, blend_mode is the name that you want to use to identify the blend mode you are creating. The callback function accepts two parameters, which contain the RGB values for pixels on the current layer and the corresponding pixels on the parent layer. The function returns an object with the final values for the RGB channels.

Here is an example of a custom blend mode which sets the value of individual channels of a pixel to 255 if the value of that channel for the corresponding pixel in the parent layer is over 128. If the value is below 128, the final channel value is the result of subtracting the current layer channel value from the parent channel value. The name of this blend mode is maxrgb.
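
A sketch of that maxrgb blend mode follows. The callback shape follows CamanJS's blender API (two objects carrying r, g, and b channels); the inner `blend` helper is our own factoring:

```javascript
// `rgbaLayer` is the current layer's pixel, `rgbaParent` the corresponding
// parent layer pixel. A channel becomes 255 when the parent channel is over
// 128, otherwise parent minus layer. Out-of-range results are left as-is
// here; clamping is a separate concern.
function maxrgb(rgbaLayer, rgbaParent) {
  const blend = (layer, parent) => (parent > 128 ? 255 : parent - layer);
  return {
    r: blend(rgbaLayer.r, rgbaParent.r),
    g: blend(rgbaLayer.g, rgbaParent.g),
    b: blend(rgbaLayer.b, rgbaParent.b),
  };
}

// In the browser, registration would be:
// Caman.Blender.register('maxrgb', maxrgb);
```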

Let's create another blend mode in a similar manner. This time, the final channel values will be set to 0 if the channel value for the corresponding pixel in the parent layer is greater than 128. If the channel value for the parent layer is less than 128, the final result would be the addition of the channel values for the current layer and parent layer of the particular pixel. This blend mode has been named minrgb.
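
The minrgb mode can be sketched the same way, again following CamanJS's blender callback shape; the inner `blend` helper is our own factoring:

```javascript
// A channel becomes 0 when the parent channel is over 128, otherwise the
// sum of the layer and parent channel values for that pixel.
function minrgb(rgbaLayer, rgbaParent) {
  const blend = (layer, parent) => (parent > 128 ? 0 : layer + parent);
  return {
    r: blend(rgbaLayer.r, rgbaParent.r),
    g: blend(rgbaLayer.g, rgbaParent.g),
    b: blend(rgbaLayer.b, rgbaParent.b),
  };
}

// Caman.Blender.register('minrgb', minrgb);
```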

Creating New Pixel-Based Filters

There are two broad categories of filters in CamanJS. You can either operate on the whole image one pixel at a time or you can modify an image using a convolution kernel. A convolution kernel is a matrix which determines the color of a certain pixel based on the pixels around it. In this section, we will focus on pixel-based filters. Kernel manipulations will be covered in the next section.

Pixel-based filters are given the value of RGB channels for one pixel at a time. The final RGB values for that particular pixel are not affected by the surrounding pixels. You can create your own filters using Caman.Filter.register("filter_name", callback);. Any filter that you create must call the process() method. This method accepts the filter name and a callback function as parameters.

The following code snippet shows you how to create a pixel-based filter which turns images greyscale. This is done by calculating the luminance of each pixel and then setting each of its channels equal to that luminance value.
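
Here is a sketch of that filter's core. Using the standard Rec. 601 luma weights is an assumption on our part; the commented registration shows roughly how it would plug into CamanJS in the browser:

```javascript
// Weighted sum of the RGB channels (Rec. 601 weights).
function luminance(rgba) {
  return 0.299 * rgba.r + 0.587 * rgba.g + 0.114 * rgba.b;
}

// Browser registration with CamanJS would look roughly like this:
// Caman.Filter.register('myGreyscale', function () {
//   this.process('myGreyscale', function (rgba) {
//     const lum = luminance(rgba);
//     rgba.r = lum;
//     rgba.g = lum;
//     rgba.b = lum;
//     return rgba;
//   });
// });
```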

You can create a threshold filter in a similar manner. This time, we will allow users to pass a threshold value. If the luminance of a particular pixel is above the user-provided limit, that pixel will turn white; if it is below the limit, that pixel will turn black.
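
The per-pixel logic might look like this. The `thresholdPixel` helper is our own illustrative name, and the Rec. 601 luminance weights are an assumption:

```javascript
// `limit` is the user-provided threshold value.
function thresholdPixel(rgba, limit) {
  const lum = 0.299 * rgba.r + 0.587 * rgba.g + 0.114 * rgba.b;
  const value = lum > limit ? 255 : 0; // white above the limit, black below
  return { r: value, g: value, b: value, a: rgba.a };
}

// Browser registration with CamanJS would be roughly:
// Caman.Filter.register('threshold', function (limit) {
//   this.process('threshold', function (rgba) {
//     return thresholdPixel(rgba, limit);
//   });
// });
```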

As an exercise, you should try and create your own pixel-based filters which, for example, increase the value for a particular channel on all pixels.

Instead of manipulating the color of the current pixel, CamanJS also allows you to set the color for pixels at absolute and relative locations. Unfortunately, this behavior is a little buggy, so we will have to rewrite some methods. If you look at the source code of the library, you will notice that methods like getPixel() and putPixel() call the methods coordinatesToLocation() and locationToCoordinates() on this. However, these methods are not defined on the prototype but on the class itself.

Another issue with the library is that the putPixelRelative() method uses the variable name nowLoc instead of newLoc in two different places. You can get rid of both these issues by adding the following code inside your script.
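
Such a patch might be sketched as below. The coordinate math assumes CamanJS's flat RGBA pixel array, where each pixel occupies four consecutive entries; which prototype to attach the helpers to depends on the CamanJS version, so verify against the source you ship:

```javascript
// Map (x, y) coordinates to an index into the flat RGBA array.
function coordinatesToLocation(x, y, width) {
  return (y * width + x) * 4;
}

// The inverse mapping: array index back to (x, y) coordinates.
function locationToCoordinates(loc, width) {
  return {
    x: (loc % (width * 4)) / 4,
    y: Math.floor(loc / (width * 4)),
  };
}

// In the browser, attach these to the prototype that getPixel() and
// putPixel() are called on, and redefine putPixelRelative() with its
// nowLoc references renamed to newLoc.
```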

This filter randomly sets the value of pixels two rows up and two columns to the right of the current pixel to white. This erases parts of the image. Hence the name of the filter.

Creating New Kernel Manipulation Based Filters

As I mentioned earlier, CamanJS allows you to create custom filters where the color of the current pixel is determined by the pixels surrounding it. Basically, these filters go over each pixel in the image that you are editing. A pixel in the image will be surrounded by eight other pixels. The values of these nine pixels from the image are multiplied by the corresponding entries of the convolution matrix. All these products are then added together to get the final color value for the pixel. You can read about the process in more detail in the GIMP documentation.

Just like pixel-based filters, you can define your own kernel manipulation filters using Caman.Filter.register("filter_name", callback);. The only difference is that you will now call processKernel() inside the callback function.

Here is an example of creating an emboss filter using kernel manipulation.
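
A sketch using the classic 3x3 emboss kernel follows. The `applyKernel` helper is our own and shows the convolution arithmetic for one channel of one pixel; the commented registration is roughly how it would plug into CamanJS:

```javascript
// The classic emboss convolution kernel.
const embossKernel = [
  [-2, -1, 0],
  [-1,  1, 1],
  [ 0,  1, 2],
];

// `neighborhood` is a 3x3 array of channel values centred on the pixel
// being computed; each entry is multiplied by the matching kernel entry
// and the products are summed.
function applyKernel(kernel, neighborhood) {
  let sum = 0;
  for (let i = 0; i < 3; i++) {
    for (let j = 0; j < 3; j++) {
      sum += kernel[i][j] * neighborhood[i][j];
    }
  }
  return sum;
}

// With CamanJS, the registration would be roughly:
// Caman.Filter.register('emboss', function () {
//   this.processKernel('emboss', embossKernel);
// });
```

Note that the kernel entries sum to 1, so a flat region of the image is left unchanged; only edges produce the embossed relief.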

The following CodePen demo will show all the filters that we created in this tutorial in action.

Final Thoughts

In this series, I have covered almost everything that CamanJS has to offer in terms of canvas-based image editing. You should now be able to use all the built-in filters, create new layers, apply blend modes on those layers, and define your own blend modes and filter functions.

You can also go through the guide on the CamanJS website in order to read about anything that I might have missed. I would also recommend that you read the source code of the library in order to learn more about image manipulation. This will also help you uncover any other bugs in the library.

HTML is almost intuitive. CSS is a great advancement that cleanly separates the structure of a page from its look and feel. JavaScript adds some pizazz. That's the theory. The real world is a little different.

In this tutorial, you'll learn how the content you see in the browser actually gets rendered and how to go about scraping it when necessary. In particular, you'll learn how to count Disqus comments. Our tools will be Python and awesome packages like requests, BeautifulSoup, and Selenium.

When Should You Use Web Scraping?

Web scraping is the practice of automatically fetching the content of web pages designed for interaction with human users, parsing them, and extracting some information (possibly navigating links to other pages). It is sometimes necessary when there is no other way to extract the information you need. Ideally, the application provides a dedicated API for accessing its data programmatically. There are several reasons why web scraping should be your last resort:

It might be slow and expensive (if you need to fetch and wade through a lot of noise).

Understanding Real-World Web Pages

Let's understand what we are up against, by looking at the output of some common web application code. In the article Introduction to Vagrant, there are some Disqus comments at the bottom of the page:

In order to scrape these comments, we need to find them on the page first.

View Page Source

Every browser since the dawn of time (the 1990s) has supported the ability to view the HTML of the current page. Here is a snippet from the view source of Introduction to Vagrant that starts with a huge chunk of minified and uglified JavaScript unrelated to the article itself. Here is a small portion of it:

Here is some actual HTML from the page:

This looks pretty messy, but what is surprising is that you will not find the Disqus comments in the source of the page.

The Mighty Inline Frame

It turns out that the page is a mashup, and the Disqus comments are embedded as an iframe (inline frame) element. You can verify this by right-clicking on the comments area, where you'll see that there is frame information and a frame source available:

That makes sense. Embedding third-party content as an iframe is one of the primary reasons to use iframes. Let's find the <iframe> tag then in the main page source. Foiled again! There is no <iframe> tag in the main page source.

JavaScript-Generated Markup

The reason for this omission is that view page source shows you the content that was fetched from the server. But the final DOM (document object model) that gets rendered by the browser may be very different. JavaScript kicks in and can manipulate the DOM at will. The iframe can't be found, because it wasn't there when the page was retrieved from the server.

Static Scraping vs. Dynamic Scraping

Static scraping ignores JavaScript. It fetches web pages from the server without the help of a browser. You get exactly what you see in "view page source", and then you slice and dice it. If the content you're looking for is available, you need to go no further. However, if the content is something like the Disqus comments iframe, you need dynamic scraping.

Dynamic scraping uses an actual browser (or a headless browser) and lets JavaScript do its thing. Then, it queries the DOM to extract the content it's looking for. Sometimes you need to automate the browser by simulating a user to get the content you need.

Static Scraping With Requests and BeautifulSoup

Let's see how static scraping works using two awesome Python packages: requests for fetching web pages and BeautifulSoup for parsing HTML pages.

Installing Requests and BeautifulSoup
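
Assuming you manage dependencies with pipenv, the installation might look like this (beautifulsoup4 is the PyPI package name for BeautifulSoup):

```shell
pipenv install requests beautifulsoup4
```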

Pipenv will create a virtual environment for you too. If you're using the code from the GitLab repo, you can just run pipenv install.

Fetching Pages

Fetching a page with requests is a one-liner: r = requests.get(url)

The response object has a lot of attributes. The most important ones are ok and content. If the request fails, r.ok will be False and r.content will contain the error. The content is a stream of bytes; it is usually better to decode it to UTF-8 when dealing with text:
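
The decode step can be sketched like this; the byte string here is canned for illustration, whereas with requests you would decode r.content:

```python
# A UTF-8 encoded byte string, standing in for r.content.
content = b'<h1>Caf\xc3\xa9 reviews</h1>'

# Decode the raw bytes into text for further parsing.
text = content.decode('utf-8')
print(text)  # → <h1>Café reviews</h1>
```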

Once we have a BeautifulSoup object, we can start extracting information from the page. BeautifulSoup provides many find functions to locate elements inside the page and drill down deep nested elements.

Tuts+ author pages contain multiple tutorials. Here is my author page. Each page lists up to 12 tutorials; if an author has more than 12, you can navigate to the next page. The HTML for each article is enclosed in an <article> tag. The following function finds all the article elements on the page, drills down to their links, and extracts the href attribute to get the URL of the tutorial:
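
Such a function might be sketched as below, run here on a tiny inline HTML snippet rather than a live author page (the markup and URLs are made up for illustration):

```python
from bs4 import BeautifulSoup

def get_tutorial_links(html):
    """Return the href of the first link inside each <article> element."""
    soup = BeautifulSoup(html, 'html.parser')
    links = []
    for article in soup.find_all('article'):
        link = article.find('a')
        if link is not None and link.get('href'):
            links.append(link['href'])
    return links

# A stand-in for the fetched author page.
page = """
<article><a href="https://example.com/tutorial-1">Tutorial 1</a></article>
<article><a href="https://example.com/tutorial-2">Tutorial 2</a></article>
"""
print(get_tutorial_links(page))
# → ['https://example.com/tutorial-1', 'https://example.com/tutorial-2']
```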

Dynamic Scraping With Selenium

Static scraping was good enough to get the list of articles, but as we saw earlier, the Disqus comments are embedded as an iframe element by JavaScript. In order to harvest the comments, we will need to automate the browser and interact with the DOM interactively. One of the best tools for the job is Selenium.

Selenium is primarily geared towards automated testing of web applications, but it is great as a general-purpose browser automation tool.

Installing Selenium

Type this command to install Selenium: pipenv install selenium

Choose Your Web Driver

Selenium needs a web driver (the browser it automates). For web scraping, it usually doesn't matter which driver you choose. I prefer the Chrome driver. Follow the instructions in this Selenium guide.

Chrome vs. PhantomJS

In some cases you may prefer to use a headless browser, which means no UI is displayed. Theoretically, PhantomJS is just another web driver. But, in practice, people reported incompatibility issues where Selenium works properly with Chrome or Firefox and sometimes fails with PhantomJS. I prefer to remove this variable from the equation and use an actual browser web driver.

Counting Disqus Comments

Let's do some dynamic scraping and use Selenium to count Disqus comments on Tuts+ tutorials. Here are the necessary imports.

The get_comment_count() function accepts a Selenium driver and URL. It uses the get() method of the driver to fetch the URL. This is similar to requests.get(), but the difference is that the driver object manages a live representation of the DOM.

Then, it gets the title of the tutorial and locates the Disqus iframe using its parent id disqus_thread and then the iframe itself:

The next step is to fetch the contents of the iframe itself. Note that we wait for the comment-count element to be present because the comments are loaded dynamically and not necessarily available yet.

Conclusion

Web scraping is a useful practice when the information you need is accessible through a web application that doesn't provide an appropriate API. It takes some non-trivial work to extract data from modern web applications, but mature and well-designed tools like requests, BeautifulSoup, and Selenium make it worthwhile.

Additionally, don’t hesitate to see what we have available for sale and for study in the Envato Market, and don't hesitate to ask any questions and provide your valuable feedback using the feed below.

]]>2018-03-23T12:00:00+00:00//www.4elements.com/blog/read/challenge_create_a_to-do_list_in_react
https://www.4elements.com/blog/read/challenge_create_a_to-do_list_in_react#When:11:51:20ZAre you ready to test your knowledge of React? In this video from my course on Modern Web Apps With React and Redux, you'll be challenged to build a basic to-do list app in React. Specifically, you’ll need to pass data to a child component where it will be updated and sent back to the parent component.

If you're not sure how to do that, don't worry—you can skip ahead to the solution. I'll take you through the whole process in detail, to show you how it's done!

Challenge: Create a To-Do List in React

The Challenge

In this challenge, you're going to create a basic to-do list app in React.

We have two components already built: a TodoList component and a TodoItem component.

The TodoList component just has a list of to-dos in its state, and each one has a text property and a done property. Then we can render our list of to-do items in the TodoItem component.

Then in the TodoItem component, we just render a list item with an input box which has the value. If this to-do item is done then the value is not editable—it's read-only. Then we have a button which we can use to "Finish" or "Unfinish" a task.

Right now, it's rendering just fine over on the right side, but we can't actually change these things in the state of our TodoList component. So if I try to make changes in these input elements, nothing happens. If I click Finish, the text on the button doesn't change as it should. And I can still select text in the input fields and, if I could actually make changes, it would be editable.

All of that needs to be wired up correctly. And that's your challenge! Here's the CodePen with all of the source code for you to work with.

Fork the pen and try to figure out your own solution! Or read on and I'll walk you through it below.

The Solution

Start by forking the original CodePen and giving it a different name, e.g. by adding "MY SOLUTION".

We have a couple of different things we need to do here. First, inside our TodoItem component, we need to be able to make changes to this data. That means we need to have an onClick handler for the button and an onChange handler for the input element.

But then we need a way to pass those changes back up to the parent TodoList component. So that means that TodoList needs to pass a function down to TodoItem, so that it can call that function.

Creating an updateTodo Function

So we'll start by adding an updateTodo function in TodoList with some dummy text for now, which we'll update later. And now that we've created updateTodo, we can use it within TodoItem.

So how might this work? Well, let's start with the button. We'll add an onClick handler, and we'll add this.handleClick.

So now we need to write our handleClick function. The handleClick function is going to make a change to the todo property that was passed down to TodoItem.

When this is clicked, we want to reverse whatever the value of done is. So if it's false, switch it to true, and if it's true, switch it to false. And then, of course, we want to pass that newly updated todo object back up through the updateTodo function.

We can get our newTodo by doing Object.assign, so that we don't mutate the data. And we will copy all the properties in our existing to-do, which is actually this.props.todo.

But then we want to make sure that we overwrite the done property, which should be the reverse or the negative of this.props.todo.done.

So that's our newTodo. And then we can do this.props.updateTodo, and we will pass it our newTodo.
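
Stripped of the React plumbing, the data handling might look like this; `makeToggledTodo` is our own name for the pure part of handleClick:

```javascript
// Copy the todo and flip its `done` flag without mutating the original.
function makeToggledTodo(todo) {
  return Object.assign({}, todo, { done: !todo.done });
}

// Inside the component, handleClick would then be roughly:
// handleClick() {
//   this.props.updateTodo(makeToggledTodo(this.props.todo));
// }
```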

OK, so that's handling the click. Now let's go down and write our updateTodo, so that we can actually see this in action. If I click Finish, you can see that our new to-do object is printed out with done set to true, but we're not seeing that reflected in the UI yet. That's because we still need to save this new todo back into our todos state.

Setting the State

So let's go ahead and write an updateTodo function, and it's going to take that newTodo as its parameter. And inside it, we're going to do this.setState.

Now, I'm going to set the state in a way that you may not have seen before. We're going to pass a function to set the state instead of passing an object. This is actually considered a good practice in React and may possibly be the only way to set state in the future. The function that you pass to setState will receive the current state as a parameter. So we can receive that state as a parameter to this function, and then we return our new state from this function.

This is actually a more reliable way of creating a new state that is based on the old state. You can almost think of it as a kind of reducer function, within our setState call.

So we're going to go ahead and return a new object here. And since that's all we're going to do from this function, we can actually just wrap the curly braces in parentheses so that it knows that this is an object and not the function block.

Let's get our existing state.todos, and we will map over each todo there, and if the todo.id is equal to the newTodo.id, then we know that this is the todo object that we have to replace. So we can replace it with the newTodo, and otherwise we can just return the old todo, because it's not the one that we're looking to replace.
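
The updater can be sketched as a pure function; `makeUpdater` is our own illustrative wrapper that builds the function passed to setState:

```javascript
// Given the new todo, return a state updater: it maps over the previous
// todos and swaps in newTodo where the ids match, leaving the rest alone.
function makeUpdater(newTodo) {
  return (state) => ({
    todos: state.todos.map(
      (todo) => (todo.id === newTodo.id ? newTodo : todo)
    ),
  });
}

// In the component: this.setState(makeUpdater(newTodo));
```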

And then we just need to change our updateTodo function to refer to this.updateTodo.

Now, if you click Finish, you'll see that the button changes to Unfinish, and that's because todo.done is now true instead of false. Also, the "wash floor" text is now a little bit grayed out, and it's no longer editable. If I click Unfinish, it switches back to Finish, and the text box is editable again.

Making the Text Editable

Our next step is to make the text editable by adding an onChange handler.

On the input line, we'll add onChange={this.handleChange}. And then we need to write handleChange.

We'll start by creating a newTodo and copying all the properties from this.props.todo, and then handleChange will pass our event object. We're going to set the text to be e.target.value. And then underneath that we'll do this.props.updateTodo, and we'll pass it the newTodo.

So now, if we change the text, it works just fine. We can now say buy eggs instead of milk, and we can say wash the car instead of the floor. So now, we are successfully making changes to an object in a child component and passing those changes back up to the parent component, where they can be stored.

Simplifying the Code

So it now works as we wanted it to, but there's still one more thing I want to do. You'll notice that the handleChange function and the handleClick function have a lot of similar code. I've often had child components like this where we want to update an object in some way and then pass it back to a parent, and you'll find that a common pattern for doing that is using Object.assign in this way.

So what we're going to do to tidy things up is create a new function called sendDelta. Here, delta is just a term for whatever needs to change between the current to-do and the new to-do. What we can do is pass sendDelta an object containing just the properties that need to change.

Then we just copy the code from handleClick and paste it in sendDelta. The delta is basically the last argument that we've passed to Object.assign, so we can go ahead and replace the code highlighted below with delta, and then just send that.

So now in handleClick and handleChange, we can simply replace most of the code with this.sendDelta, as shown below. As you can see, it's a lot more concise.
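
The shared pattern can be written as a pure function; in the component, `todo` and `updateTodo` would come from this.props:

```javascript
// Merge the delta into a copy of the todo and hand the result to updateTodo.
function sendDelta(todo, delta, updateTodo) {
  updateTodo(Object.assign({}, todo, delta));
}

// handleClick:  sendDelta(todo, { done: !todo.done }, updateTodo);
// handleChange: sendDelta(todo, { text: e.target.value }, updateTodo);
```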

So that's the solution! For the full source code, you can refer to the solution CodePen shown below:

Of course, you may have come up with a different solution. If so, that's great. In any case, we've now successfully created a child component that can change its data and then send those changes back up for storing to its parent component.

Watch the Full Course

In the full course, Modern Web Apps With React and Redux, you'll learn how to use React and Redux to build a complete web application. You'll start with the simplest possible architecture and slowly build up the app, feature by feature. By the end, you'll have created a complete flashcards app for learning by spaced repetition, and you'll also have learned a whole lot about React and Redux, as well as sharpening your ES6 (ECMAScript 2015) skills.

You can take this course straight away with a subscription to Envato Elements. For a single low monthly fee, you get access not only to this course, but also to our growing library of over 1,000 video courses and industry-leading eBooks on Envato Tuts+.

Plus you now get unlimited downloads from the huge Envato Elements library of 460,000+ creative assets. Create with unique fonts, photos, graphics and templates, and deliver better projects faster.

]]>2018-03-20T11:51:20+00:00//www.4elements.com/blog/read/deploy_php_web_applications_using_laravel_forge
https://www.4elements.com/blog/read/deploy_php_web_applications_using_laravel_forge#When:12:00:00ZDevelopers love to automate things—for every process between development and production, they are keen to have a script that makes their workflow easier. This is also the case with deployment.

The process of pushing the final build and deploying the app should be as easy as pressing a Deploy now button, but that is not what happens most of the time. We end up investing our time and resources in configuring the server, setting up the environment, moving files that we thought were not relevant for production builds, and so on.

Some of us prefer to send files to the server manually using FTP or have the code pushed into a GitHub repo, whereas others prefer a deployment tool to make the process easier. One such tool that makes PHP deployment a breeze is Laravel Forge.

Don't let the Laravel brand name mislead you. Apart from Laravel, you can use the service to host WordPress, Symfony, Statamic, or any other web project, as long as it's PHP. Personally, I like Laravel Forge for its simplicity and how easy it is to get used to.

In this tutorial, I am going to take you through the steps to hook Laravel Forge with AWS and explore what it has to offer.

Overview

Laravel Forge lets you spin up cloud servers and handle deployment processes using Git and some of the popular server providers available. The process is explained below:

First, you will need to connect AWS or any other cloud provider to your Forge account. Next, link your source control such as GitHub to Forge. You will now be able to create servers. Install your source control repository on the server. Finally, press the deploy button. Easy enough, right?

Servers provisioned with Laravel Forge come shipped with the following stack:

Ubuntu 16.04

Nginx

PHP 7.2/7.1/7.0/5.6

MySQL/MariaDB/Postgres

Redis

Memcached

Once the server has been created, you can further configure things.

When you sign up, you can choose between the different plans that they offer. I opted for the $12/month basic plan; however, you will get a free trial with access to everything on the list for five days.

Once you've logged in, you will see something like this below.

You can choose between Digital Ocean, AWS, Linode, and Vultr for the service provider. Alternatively, you can use Forge with a custom VPC too. As for the source control, Forge supports GitHub, GitLab, and Bitbucket. In this tutorial, I am going to discuss the basics of configuring AWS to work with Forge and GitHub for source control. Once you are done, you will be able to create and provision any number of servers.

If you're using another service provider on the list, you can skip this step and catch up with us later, after we've configured AWS and Laravel Forge.

Setting Up Laravel Forge and AWS

To set up Forge and AWS, here are the steps that you need to follow.

1. Log in to Laravel Forge

Log in to Laravel Forge and choose AWS as the service provider. You'll be asked for an Access Key ID (key) and a Secret Access Key (secret). You will need to create a specific IAM user with a policy that provides sufficient access to Laravel Forge. IAM is Amazon's way of mapping permissions on each user so that you can revoke access if anything goes wrong.

2. Create a New IAM User

Sign in to AWS Console and create a new IAM user.

Give the user a meaningful name and check the box that says Programmatic Access.

3. Choose the Right Policy

Set the right permission for the laravel-forge IAM user. Create a new user group because user groups are ideal for managing permissions. Now the natural question is, "What policies should the forge user have access to?" Although you could provide it with AdministratorAccess, you shouldn't.

If you need Forge to create and provision servers on your behalf, you will need to add two policies:

AmazonEC2FullAccess

AmazonVPCFullAccess

4. Save the Credentials and Confirm

Confirm the IAM account and, on the next page, you'll find the Access Key and the Secret Code.

Head over to the Laravel Forge page and paste them there. That's it.

5. Link Your GitHub Account to Forge

Connect your GitHub/Bitbucket account to Forge if you haven't done that already. Forge will add a public key to your account when you create a server. If you need to add a new service provider and/or update the source control, you have those options inside your profile.

Creating a New Server

Choose t2.micro with 1GB RAM if you're on AWS free tier. As for the other settings, I am going to go with the defaults. This includes MySQL for the database and PHP version 7.2. You can customize the database name later on. To keep things simple, I've decided not to use a load balancer. If you're wondering about the post-production recipe, I have covered that towards the end of this tutorial.

It might take up to five minutes for the server to be created. You will be given the credentials for the sudo access. Store them in a secure place so that you can use them in the future. To see that things are working as expected, go to the server's IP address and you should see the output of phpinfo() on your screen.

Server Management Interface

The interface that you see after creating a server is the server management dashboard.

You can do a whole lot of things here, such as:

site management

adding SSH keys

database configuration

updating PHP settings

scheduling a task

starting a daemon

managing the network and configuring the firewall

monitoring the application using Blackfire or Papertrail

configuring meta settings

That's a lot of features bundled in there. I've covered the important ones in this tutorial. Let's start with the site management. As per the Forge docs:

Sites represent each "domain" on your server. The "default" site is included with each freshly provisioned server; however, you should delete it and create a new site with a valid domain name when you are ready to launch your production site.

As you can see, Forge has already set up a default site for us. You can create any number of sites and route them to your subdomains. For the purpose of this tutorial, I will stick to the default site. The web directory is set to /public by default. This is how it should be configured for Laravel and most other web applications.

If you click on a specific site, you will see the site management interface. You can manage, deploy, and configure individual sites from here.

Site Management Interface

Here is what the interface initially looks like.

You can either install from a Git repository or install WordPress. For the purpose of this tutorial, I've created a sample Contact us application that you can fork into your account. You can specify the name of the project and the branch. Once you're done, you should have the controls for deploying your application.

I will give you a quick tour of the options available.

Deploy Now and Quick Deploy

You can deploy manually using the Deploy Now button. Alternatively, you can enable the Quick Deploy option, which automatically deploys the project whenever you push code to the master branch of the chosen GitHub repo.

Deployment Script

The default deploy script pulls the latest code from the repository, installs Composer dependencies, reloads the PHP service, and runs database migrations every time the app is deployed.

Deployment Trigger URL

You can use this to integrate your app into a third-party service or create a custom deployment script. When the URL receives a request, the deployment script is triggered.

Update the Repo and the Branch

If you need to update the branch or install a newer version of the same project on a different repository, you can use these options. If you are updating the branch, you might have to update the branch name in the deployment script too.

Environment

Forge automatically generates an environment file for the application. Some of the details such as database credentials are automatically added to the environment. However, if the app uses an API, you can place the API key safely in the environment. Even if you're running a generic PHP web app, you can access the ENV variables using the getenv() method.

Queue Worker

Starting a queue worker in Forge is the same as running the queue:work Artisan command. Forge manages queue workers using a process monitor called Supervisor so that the process keeps on running permanently. You can create multiple queues based on queue priority or any other classification that you find useful.

SSL

In the past, securing a website with SSL was neither easy nor free. Forge lets you install an existing certificate, or you can obtain a free certificate from LetsEncrypt. It's fast and easy. If you need SSL for wildcard subdomains, you can add the free Cloudflare certificates to Forge.

Back to the Server Management interface, we have SSH keys.

Adding SSH Keys

Although most of the configurable options are available on the dashboard, if you need to work on the server directly, you should connect to it using SSH. SSH keys are a more secure way of logging into a VPS and provide more protection than passwords alone.

To access the server via SSH, you will need to generate a key pair if you haven't already. The public key will be placed on the server, and the private key will remain on your machine. You can then use this key pair to connect to the server instance.

Note: The SSH key added from the server management dashboard will be specific to that server. If you need to automatically add keys to all the servers from here on, you can add them from your Profile settings.

To generate a key pair, run the following command.

ssh-keygen -t rsa

You will be asked a couple of questions such as the file where you would like to store the key and the passphrase for additional security. Next, add the SSH key to the ssh-agent.

ssh-add ~/.ssh/id_rsa

Copy the public key and add it to Forge's list of SSH keys.

cat ~/.ssh/id_rsa.pub # Copy the output of this command

Configuring PHP and MySQL

You can use the interface to configure PHP and MySQL. For the database, the available options include:

Create new databases.

Add new users.

Update users' access to a database.

Update the database password that Forge has on record.

Make sure that you fill in the updated data in your .env file.

You can configure the following PHP settings:

Upgrade to the latest version of PHP.

Change the upload file size.

Optimize OPCache for production so that the compiled PHP code will be stored in memory.

Other Important Settings

Here I've listed some of the other settings available.

Scheduling a Task

You can use Forge's scheduler to schedule recurring tasks or run cron jobs. If you need to send out email periodically, clean up something, or run a script, you can use the task scheduler. A task is created by default that runs composer self-update on a nightly basis. You can try scheduling a new one with a frequency of your choice.

Starting a Daemon

A daemon is a computer program that runs in a background process. Laravel Forge lets you start a daemon and uses Supervisor to ensure that the daemon stays running. If the daemon crashes for some reason, Supervisor will restart the script automatically.

Monitoring the Application

Laravel Forge has built-in support for tools that monitor your application's performance by gathering data about resources such as memory, CPU time, and I/O operations. The tools available are Blackfire.io and Papertrail. To start profiling your application, you just need to retrieve the right credentials from the third-party website, and that's it.

Configuring the Server Network and Firewall

If you need to update the firewall settings, you don't have to go to the AWS console to make that happen. You can create new firewall rules from the dashboard. If you have other servers provisioned using the same provider and region, you can set up a server network so that they can communicate painlessly.

Summary

Laravel Forge is an incredible tool that makes deployment a piece of cake. It has tons of features and an easy-to-use UI that lets you create and provision servers and deploy applications without any hassle. Once you've configured the service provider, chances are high that you won't need to access the AWS console for managing the server again.

In this tutorial, I've covered the basics for configuring AWS with Laravel Forge and the steps for provisioning a server and deploying an application. I've also discussed almost all the features available in the Forge interface.

For those of you who are either just getting started with Laravel or looking to expand your knowledge, site, or application with extensions, we have a variety of things you can study in Envato Market.

Do you have any experience to share with deploying PHP applications using Laravel Forge or any other popular deployment tool? Let us know in the comments.

2018-03-19T12:00:00+00:00
https://www.4elements.com/blog/read/getting_started_with_the_mojs_animation_library_the_burst_module

We started this series by learning how to animate HTML elements using mojs. In the second tutorial, we moved on to animation of built-in SVG shapes using the Shape module. The third tutorial covered more ways of animating SVG shapes using the ShapeSwirl and stagger modules.

Now, we will learn how to animate different SVG shapes in a burst formation using the Burst module. This tutorial will depend on concepts we covered in the previous three tutorials. If you have not already read them, I would suggest that you go through them first.

Creating Basic Burst Animations

The first thing that we need to do before we can create any burst animations is instantiate a Burst object. After that, we can just specify the values of different properties to control how the animation plays out. The names of a lot of properties in the Burst module are the same as the properties in the Shape module. However, these properties perform very different tasks in this case.

The left and right properties determine the initial position of the burst instead of particles inside it. Similarly, the x and y properties determine the shift of the whole burst instead of individual particles.

The radius of the circle formed by all the burst particles is controlled by the radius property. This is very different from the radius property of individual shapes, which determines the size of those shapes. In the case of a burst, the radius determines how much further apart the individual shapes in it are going to be.

The number of shapes or particles in a single burst can be specified using the count property. By default, there will be five particles in each burst that you create. All these particles are evenly spaced over the circumference of the burst. For example, if there are four particles, they will be placed at 90 degrees to each other. If there are three particles, they will be placed at 120 degrees.

If you don't want the burst particles to cover the whole 360 degrees, you can specify the portion that should be covered using the degree property. Any value above 0 is valid for this property. The specified number of degrees will be evenly distributed between all the particles. If the degree value is over 360, the shapes might overlap.

The angle specified using the angle property determines the angle of the whole burst. In this case, individual particles are not rotated around their own center but around the center of the burst. This is similar to how the earth revolves around the sun, which is different from the rotation of the earth on its own axis.

The scale property scales the value of all physical properties of the burst and in turn individual shapes. Just like other burst properties, all shapes in it would be scaled at once. Setting the burst scale to 3 will increase the radius of the whole burst as well as the size of individual shapes by 3.

In the following code snippet, we are creating five different bursts using the properties we just discussed.
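The original listing isn't reproduced here, but a hedged sketch of two of those bursts might look like the following. The property values are illustrative, not the article's exact numbers; in the browser you would pass each options object to `new mojs.Burst(...)` and call `.play()`.

```javascript
// Illustrative burst options (not the article's exact values).
// In the browser, with mojs loaded: new mojs.Burst(burstA).play();
const burstA = {
  left: '50%', top: '50%',  // initial position of the whole burst
  radius: { 0: 100 },       // burst radius animates from 0 to 100
  count: 20,                // particles, evenly spaced around the circle
  degree: 360,              // default: particles cover the full circle
  children: { shape: 'line', stroke: 'cyan' },
};

// burstE differs only in the degrees its particles have to cover:
const burstE = Object.assign({}, burstA, { degree: 3600 });

// Spacing between adjacent particles:
console.log(burstA.degree / burstA.count); // 18
console.log(burstE.degree / burstE.count); // 180
```

The arithmetic in the last two lines is the same spacing calculation discussed in the paragraphs that follow.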

You can see that burstA and burstE only differ in the number of degrees that they have to cover. Since the particles in burstA have to cover 360 degrees (the default value), they are placed 360/20 = 18 degrees apart. On the other hand, the particles in burstE are placed 3600/20 = 180 degrees apart. Starting from zero, the first particle is placed at 0 degrees, and the next is placed at 180 degrees.

The third particle is then placed at 360 degrees, which is basically equal to 0 degrees. The fourth particle is then placed at 540 degrees, but that is basically equal to 180 degrees. In other words, all the odd-numbered particles are placed at 0 degrees, and all the even-numbered particles are placed at 180 degrees. In the end, you only see two particles because all the others overlap with the first two.

It is important to remember that you cannot directly control the duration, delay or easing function of the burst animations. The module determines all these values automatically based on the values of different children being animated.

Manipulating Individual Burst Particles

So far in this tutorial, all the particles in a burst had the same animation applied to them. Their angle, scale, radius, and position all changed by the same value. Moreover, we were not able to control the duration and delay of either the individual particles or the burst as a whole. The mojs Burst module does not have a set of properties which can directly change all these values. However, we can specify the animation value for individual particles, which in turn affects the burst animation.

All the particles in a burst animation are considered to be children of the original Burst object. Therefore, mojs allows us to control the animation of individual burst particles using a children property, which accepts an object as its value. You can use all the ShapeSwirl properties except x and y inside the children object. This makes sense because the individual particles in a burst have to appear at specific positions, and letting us shift individual particles at random would break the burst's formation.

Any children property values that you don't specify will be set to the default provided by the ShapeSwirl module. In the following example, we are animating 20 different lines of our burst animation. This time, the angle property has been set on individual particles instead of the Burst object so that only the lines rotate around their center instead of the whole object. As we learned in the previous tutorial, all the ShapeSwirl objects scale down from 1 to 0 by default. That's why the lengths of the lines change from 40 to 0 in the animation.

As I mentioned earlier, we can animate all the ShapeSwirl properties inside the burst animations. Each child in the animation can have its own set of properties. If only one value is provided, it will be applied on all the child particles. If the values are provided as an array, they will be applied sequentially, one particle at a time.

Here is the JavaScript code to create five different burst animations using all the concepts we have learned so far.
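That listing isn't included here, but one of those bursts might be configured along the following lines. This is a sketch with illustrative names and values, not the article's exact code; the real code would hand the object to `new mojs.Burst(...)`.

```javascript
// Illustrative options only. With mojs loaded in the browser:
// new mojs.Burst(options).play();
const options = {
  count: 20,
  angle: { 0: 180 },              // rotates the whole burst around its center
  children: {
    shape: 'polygon',             // triangles
    points: 3,
    angle: { 0: 360 },            // rotates each triangle around its own center
    duration: 4000,               // slows every child down to 4000ms
    fill: ['orange', 'cyan', 'yellow'], // cycled across the 20 triangles
  },
};

// mojs cycles a too-short array across all particles, so particle i gets:
const fillFor = (i) => options.children.fill[i % options.children.fill.length];
console.log(fillFor(0), fillFor(3)); // orange orange
```

The `fillFor` helper is hypothetical; it only demonstrates the cycling behaviour described below for a fill array that is shorter than the particle count.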

In the first burst animation, the angle applied directly on the Burst object rotates the whole group around the center of the burst object. However, the angle applied inside the children property rotates all the triangles around their own centers. We also slowed down the burst animation by changing the animation duration for all the children to 4000ms.

In the second burst animation, the color of all the triangles is taken from the array passed to the fill property. We have specified only three fill colors, but the total number of triangles is 20. In such cases, mojs keeps cycling through the array elements and fills the triangles with the same three colors again and again.

In the fourth animation, we use rand strings, which we learned about in the previous tutorial, to randomly choose a scale value for all the child particles. We also set the value of isShowEnd property to false in order to hide the particles at the end of the animation.

In the fifth animation, we use the then() method from the Shape module tutorial to play another animation sequence after the first one finishes.

Final Thoughts

The aim of this series was to get you acquainted with the basics of the mojs animation library. Each tutorial focused on a single module and how you can use the properties in that module to create basic animations.

This last tutorial used the concepts from the previous tutorials to create slightly more complicated animations. Mojs is a very powerful animation library, and the final results you get depend on how creative you can get with all the properties, so keep experimenting.

If there is anything that you would like me to clarify in this tutorial, please let me know in the comments.

2018-03-16T12:00:00+00:00
https://www.4elements.com/blog/read/jwt_authentication_in_django

This tutorial will give an introduction to JSON Web Tokens (JWT) and how to implement JWT authentication in Django.

What Is JWT?

JWT is an encoded JSON string that is passed in headers to authenticate requests. It is usually obtained by signing JSON data with a secret key. Because the token is self-contained and verifiable, the server doesn't need to query the database every time to retrieve the user associated with a given token.

How JSON Web Tokens Work

When a user successfully logs in using their credentials, a JSON Web Token is obtained and saved in local storage. Whenever the user wants to access a protected URL, the token is sent in the header of the request. The server then checks for a valid JWT in the Authorization header, and if found, the user will be allowed access.

A typical content header will look like this:

Authorization:
Bearer eyJhbGciOiJIUzI1NiIsI

Below is a diagram showing this process:

The Concept of Authentication and Authorization

Authentication is the process of identifying a logged-in user, while authorization is the process of identifying if a certain user has the right to access a web resource.

API Example

In this tutorial, we are going to build a simple user authentication system in Django using JWT as the authentication mechanism.

Requirements

Django

Python

Let's get started.

Create a directory where you will keep your project and also a virtual environment to install the project dependencies.

Creating Models

Django comes with a built-in authentication system which is very elaborate, but sometimes we need to make adjustments, and thus we need to create a custom user authentication system. Our user model will be inheriting from the AbstractBaseUser class provided by django.contrib.auth.models.

In users/models.py, we start by creating the User model to store the user details.

Migrations

Migrations provide a way of updating your database schema every time your models change, without losing data.

Create an initial migration for our users model, and sync the database for the first time.

python manage.py makemigrations users
python manage.py migrate

Creating a Superuser

Create a superuser by running the following command:

python manage.py createsuperuser

Creating New Users

Let's create an endpoint to enable registration of new users. We will start by serializing the User model fields. Serializers provide a way of changing data to a form that is easier to understand, like JSON or XML. Deserialization does the opposite, which is converting data to a form that can be saved to the database.

Now that we are done creating the endpoint, let's do a test and see if we are on track. We will use Postman to do the tests. If you are not familiar with Postman, it's a tool which presents a friendly GUI for constructing requests and reading responses.

As you can see above, the endpoint is working as expected.

Authenticating Users

We will make use of the Django-REST Framework JWT Python module we installed at the beginning of this tutorial. It adds JWT authentication support for Django Rest Framework apps.

But first, let's define some configuration parameters for our tokens and how they are generated in the settings.py file.

In the code above, the login view takes username and password as input, and it then creates a token with the user information corresponding to the passed credentials as payload and returns it to the browser. Other user details such as name are also returned to the browser together with the token. This token will be used to authenticate in future requests.

The permission classes are set to AllowAny since anyone can access this endpoint.

Every time the user wants to make an API request, they have to send the token in Auth Headers in order to authenticate the request.

Let's test this endpoint with Postman. Open Postman and use the request to authenticate with one of the users you created previously. If the login attempt is successful, the response will look like this:

Retrieving and Updating Users

So far, users can register and authenticate themselves. However, they also need a way to retrieve and update their information. Let's implement this.

In order for the request to be successful, the headers should contain the JWT token as shown below.

If you attempt to request a resource without the authentication header, you will get the following error.

If a user stays beyond the time specified in JWT_EXPIRATION_DELTA without making a request, the token will expire and they will have to request another token. This is also demonstrated below.

Conclusion

This tutorial has covered what is necessary to successfully build a solid back-end authentication system with JSON Web Tokens.

2018-03-14T12:00:00+00:00
https://www.4elements.com/blog/read/10_things_men_can_do_to_support_women_in_tech

While there is no shortage of books, seminars, articles, etc. created to help women succeed in male-dominated workplaces, there is precious little information designed to help men modify their attitude and behaviour in order to promote gender equality at work.

This is a problem because, let’s face it, women will never achieve equity in the workplace or in life if they’re the only gender working towards it. Men need to be part of the solution. It's a solution worth fighting for when you consider that, according to a study by the National Center for Women & Information Technology, gender-balanced companies demonstrate superior team dynamics and productivity, perform better financially, and produce teams that stay on schedule and under budget. A win-win scenario for everyone!

So how can men be part of the solution? How can they be allies to women in male-dominated industries like tech, where female numbers are small and where they’re bound to feel outnumbered and isolated? In honour of International Women’s Day, with its theme of #PressforProgress, I offer you 10 things men can do to support women in the tech workplace.

1. Understand What Privilege Is and Accept That You Have It

Before men can think of being allies to women in our quest for gender parity, you first have to acknowledge your privileged position in the workplace and in society as a whole.

Now I know this is a hard one for many people to understand, let alone accept. So let me first define what I mean by privilege in this context. A privilege is “a special right, advantage, or immunity granted or available only to a particular person or group and not to others.” In relation to gender, being born male—particularly white, heterosexual, and Christian in an industrial nation—confers an enormous amount of privilege in comparison to men and women of other ethnicities, sexual orientations, religions, etc. But this doesn’t mean you haven’t worked hard to achieve what you have, it doesn’t invalidate the challenges you’ve personally had to overcome in your life, nor does it make you guilty of anything!

What it does mean is that as a result of a twisted structure of injustice that is no fault of yours, before you even start to work towards your goals, you are already ahead of the other players in the game. It means that people who weren’t born with your unique and totally accidental combination of characteristics have to overcome varying obstacles and sometimes seemingly insurmountable odds just to get to your starting point. It means, in other words, that some people have to work much harder for the same opportunities you’re automatically afforded.

It also means taking responsibility for figuring out how the configuration of the playing field affects all players. And more than anything, as someone who benefits from privilege, it means having compassion for others with less privilege and taking active steps to help and support them when you can.

If you don’t quite get what I’m saying, here are two videos that quite effectively demonstrate how privilege works:

If you'd like to discover your degree of privilege, you can try this quiz.

2. Recognise That Gender Inequality Has More Than One Face

Though most women face some form of gender inequality at work, not all women face it in the same way. That is because the discrimination a woman faces in the workplace may overlap with her other identities: namely race, class, ethnicity, religion, sexual orientation, etc.

If feminism champions women's rights and promotes equity between the sexes, intersectional feminism questions how these other identities mentioned above affect the way women experience discrimination.

A white woman, for example, may be discriminated against for her gender but has the advantage of race on her side. However, a woman of African ancestry or an Asian lesbian may be discriminated against because of their gender, their ethnicity, and/or their sexual orientation.

Supporting women in the tech workplace means understanding that sexism is sometimes intersectional and requires different levels of awareness and action in order to address it effectively.

3. Hire, Pay, Promote

Many of the problems women face in tech begin with hiring practices. If you’re in a position to hire for your company, you can expand your pool of suitable female candidates by attending job fairs at women’s colleges and reaching out to professional organisations that are geared towards women. In addition, to increase the likelihood of reaching a greater diversity of women, you can seek out professional organisations that cater to ethnic minorities.

Even if you’re not in a position to hire for your company, you may still influence your company's hiring practices by raising awareness of the need to employ a greater diversity of talent. You might do this by researching relevant talent pools yourself and suggesting that your hiring department consider them as viable options.

4. Advocate for Fair Workplace Policies

Getting a diversity of women in the door is a good start for any tech company, but to keep them there, male allies are needed to champion fair workplace policies.

One of the most important policies supports fair and equitable compensation by establishing transparent salary guidelines based on criteria like education and skill level, performance level, and going market rates. Other policies benefit everyone—like flexible hours, working from home, generous maternity and paternity leave programmes, and on-site child care. They are also the type of practices that can be of special benefit to women, who are often the primary caregivers in the family.

5. Take Parental Leave

When women of child-bearing age join the workforce, one of the great difficulties they face is how to balance motherhood with their professional aspirations and their company’s expectations. In fact, companies often hesitate to hire women because of the assumption that they will leave after they have a baby.

In the ideal world we’re working towards, for a two-parent household, parental leave isn’t just a woman’s issue, but an issue shared by the couple.

In addition to advocating for fair workplace policies, if more men spoke up and insisted on taking parental leave to share the obligations of parenthood equitably, this would lessen the stigma of taking time off for both women and men. It would help eradicate the motherhood bias against hiring women, since both women and men would be seen as potentially liable to take time off from work to care for their children.

6. Offer to Mentor or Sponsor

Mentors and sponsors are critical for career advancement—and in the tech world, men are 50% more likely than women to have a mentor or sponsor who can help guide their career path and support them in seeking out new opportunities.

Elizabeth Borges is a senior manager for a 12-month leadership and networking programme called EverwiseWomen. She suggests that men working in tech can make a huge difference by seeking out a high-potential junior woman to mentor or sponsor.

To become a mentor, she says to:

Set up time with a junior colleague to provide feedback about what she’s doing well and where she could improve. Ask her what work challenges she’s facing and help coach her through them.

To become a sponsor, male allies can:

Identify a woman who is doing amazing work, who could benefit from more visibility with senior leaders. Devise a stretch project that you could assign her or work with her on to help her gain that visibility, and help her expand her own perception of what she can do.

7. Be Mindful of Harassment

If there’s anything we’ve learnt from the #MeToo and Time's Up movements that took the globe by storm in 2017, it’s that sexual harassment and assault are rampant in the workplace. Women have had enough and are no longer prepared to suffer in silence.

If men of good conscience in the tech industry are to be supportive allies to women, it’s not enough to avoid obvious harassment like sexual innuendo, sexist jokes, or commenting on a woman’s appearance. Men must also have the courage to highlight and report situations in which harassment occurs. Silence in the face of gendered harassment or abuse communicates complicity with the perpetrator and isolates and demoralises the victim.

Male allies have to insist on a policy of zero tolerance for harassment in the workplace and codify this position with procedures and training about the law, as well as suggested strategies for witnesses and victims about how to react.

8. Give Women Credit for Their Ideas

Another term that gained currency last year is “hepeating”, popularised by astronomer and physics professor Nicole Gugliucci. She tweeted about hepeating after friends of hers coined the term to describe the scenario where women share their idea at work and are met with silence and indifference. Then, immediately after, the very same idea is put forth by a man who claims it as his own, and everyone approves enthusiastically.

This annoying and frustrating practice struck a chord with so many people that Gugliucci’s tweet received 200,000 likes and 65,000 retweets.

According to the Washington Post, women have recently come up with a strategy called “amplification” to stop this from happening. Amplification takes place when other women in the room listen to and repeat the key points made by a female colleague during a meeting and give credit to the woman who came up with the idea, forcing others in the room to remember the contribution and who made it.

This is something that male allies to women in tech can absolutely practice themselves, especially since a woman in tech often has no other female allies in the room.

9. Don’t Interrupt

Apart from having their ideas appropriated by male colleagues, several studies show that when it comes to meetings, women are more likely than men to be interrupted and talked over.

There’s not much to say here, other than don’t do it. It's rude, disrespectful, and unbecoming of an ally!

10. Speak Up

One of the greatest challenges women encounter in the face of sexism in the workplace is silence from men of good conscience. Often, men are silent because they don’t know how to respond. But men need to be aware that when they are silent, their silence is interpreted as support for the bad behaviour of others.

Being an ally to women means noticing all the forms of injustice that women are exposed to and taking action to hold the perpetrator accountable and to support the victim.

Conclusion

Women in the tech workplace don’t need male saviours, but they certainly could use supportive allies of conscience and integrity. These allies are willing to hold themselves and their colleagues accountable and create a healthier and more equitable work environment for all—simply because it’s the right thing to do.

2018-03-08T07:52:13+00:00
https://www.4elements.com/blog/read/a_gentle_introduction_to_higher-order_components_in_react_best_practices

This is the third part of the series on Higher-Order Components. In the first tutorial, we started from ground zero. We learned the basics of ES6 syntax, higher-order functions, and higher-order components.

The higher-order component pattern is useful for creating abstract components—you can use them to share data (state and behavior) with your existing components. In the second part of the series, I demonstrated practical examples of code using this pattern. This includes protected routes, creating a configurable generic container, attaching a loading indicator to a component, etc.

In this tutorial, we will have a look at some best practices and dos and don'ts that you should look into while writing HOCs.

Introduction

React previously had something called Mixins, which worked great with the React.createClass method. Mixins allowed developers to share code between components. However, they had some drawbacks, and the idea was dropped eventually. Mixins were not upgraded to support ES6 classes, and Dan Abramov even wrote an in-depth post on why Mixins are considered harmful.

Higher-order components emerged as an alternative to Mixins, and they support ES6 classes. Moreover, HOCs aren't tied to any particular React API; they're a generic pattern that happens to work well with React. However, HOCs have flaws too. Although the downsides of higher-order components might not be evident in smaller projects, you could have multiple higher-order components chained to a single component, just like below.
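To picture the kind of stack this produces, here is a simplified sketch in which plain functions stand in for React components and HOCs. The names echo the article's examples, but the bodies are purely illustrative.

```javascript
// A generic "HOC factory": each HOC injects one extra prop and delegates.
const makeHoc = (name) => (Component) => (props) =>
  Component({ ...props, [name]: true });

const withRouter = makeHoc('router');
const withAuth = makeHoc('auth');
const withLoader = makeHoc('loader');

// The wrapped component just reports which props it ended up receiving.
const ContactList = (props) => Object.keys(props).sort().join(',');

// Each HOC wraps the previous result, so props can originate in any layer:
const Enhanced = withRouter(withAuth(withLoader(ContactList)));
console.log(Enhanced({ contacts: [] })); // auth,contacts,loader,router
```

With three layers between the caller and `ContactList`, it is already hard to tell at a glance which layer contributed which prop, which is exactly the debugging problem discussed below.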

You shouldn't let the chaining get to the point where you are asking yourself: "Where did that prop come from?" This tutorial addresses some of the common issues with the higher-order component pattern and the solutions to get them right.

The Problems With HOC

Some of the common problems with HOCs have less to do with HOCs themselves than with your implementation of them.

As you already know, HOCs are great for code abstraction and creating reusable code. However, when you have multiple HOCs stacked up, and if something looks out of place or if some props are not showing up, it's painful to debug because the React DevTools give you a very limited clue about what might have gone wrong.

A Real-World HOC Problem

To understand the drawbacks of HOCs, I've created an example demo that nests some of the HOCs that we created in the previous tutorial. We have four higher-order functions wrapping that single ContactList component. If the code doesn't make sense or if you haven't followed my previous tutorial, here is a brief summary of how it works.

withRouter is a HOC that's part of the react-router package. It provides you access to the history object's properties and then passes them as a prop.

withAuth looks for an authentication prop and, if authentication is true, it renders the WrappedComponent. If authentication is false, it pushes '/login' to the history object.

withGenericContainer accepts an object as an input in addition to the WrappedComponent. The GenericContainer makes API calls and stores the result in the state and then sends the data to the wrapped component as props.

withLoader is a HOC that attaches a loading indicator. The indicator spins until the fetched data reaches the state.

Now you can see for yourself some of the common pitfalls of higher-order components. Let's discuss some of them in detail.

Basic Dos and Don'ts

Don't Forget to Spread the Props in Your HOC

Assume that we have an authenticated = { this.state.authenticated } prop at the top of the composition hierarchy. We know that this is an important prop and that this should make it all the way to the presentational component. However, imagine that an intermediate HOC, such as withGenericContainer, decided to ignore all its props.

This is a very common mistake that you should try to avoid while writing higher-order components. Someone who isn't acquainted with HOCs might find it hard to figure out why all the props are missing because it would be hard to isolate the problem. So, always remember to spread the props in your HOC.
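The mistake and its fix, modeled with plain functions (the contacts payload is a stand-in for fetched data):

```javascript
// Forgetting the spread makes the HOC a black hole for props:
// `authenticated` and everything else from above is silently lost.
const withContactsBroken = (Wrapped) => (props) =>
  Wrapped({ contacts: [] });

// Spreading the incoming props first, then adding the HOC's own,
// keeps everything flowing down to the wrapped component.
const withContacts = (Wrapped) => (props) =>
  Wrapped({ ...props, contacts: [] });

const Display = ({ authenticated }) => String(authenticated);
```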

Don't Pass Down Props That Have No Existence Beyond the Scope of the HOC

A HOC might introduce new props that the WrappedComponent has no use for. In such cases, it's a good practice to pass down only the props that are relevant to the composed components.

A higher-order component can accept data in two ways: either as the function's argument or as the component's prop. For instance, authenticated = { this.state.authenticated } is an example of a prop, whereas in withGenericContainer(reqAPI)(ContactList), we are passing the data as arguments.

Because withGenericContainer is a function, you can pass in as few or as many arguments as you like. In the example above, a config object is used to specify a component's data dependency. However, the contract between an enhanced component and the wrapped component is strictly through props.

So I recommend filling in the static data dependencies via the function parameters and passing dynamic data as props. The authenticated prop is dynamic, because a user can be either authenticated or not depending on whether they are logged in, but we can be sure that the contents of the reqAPI object are not going to change at runtime.
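A sketch of that split (the reqAPI shape is illustrative, and the real HOC would fetch rather than just forward the URL):

```javascript
// Static configuration enters through the function arguments;
// dynamic data arrives as ordinary props at render time.
const withGenericContainer = (reqAPI) => (Wrapped) => (props) =>
  Wrapped({ ...props, url: reqAPI.url });

const ContactList = ({ url, authenticated }) => `${url}:${authenticated}`;

// Config is fixed once, at composition time…
const Enhanced = withGenericContainer({ url: '/contacts' })(ContactList);
// …while `authenticated` can change on every render.
```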

Don't Use Higher-Order Components Inside the render Method

Applying a HOC inside render creates a brand-new component type on every render, which forces React to unmount and remount the whole subtree. Apart from the performance hitches, you will lose the state of the OriginalComponent and all of its children on each render. To solve this problem, move the HOC declaration outside the render method so that it is only created once and the render always returns the same EnhancedComponent.
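Why identity matters, modeled with plain functions (React compares component types by reference when reconciling):

```javascript
const withNothing = (Wrapped) => (props) => Wrapped(props);
const Original = ({ v }) => v;

// Bad: the HOC is applied inside "render", so every call
// produces a different component type.
const renderBad = () => withNothing(Original);

// Good: the enhanced component is created once, outside render,
// and every render returns the same type.
const EnhancedOnce = withNothing(Original);
const renderGood = () => EnhancedOnce;
```

`renderBad()` returns a different function object each time, which is exactly what makes React throw the old subtree (and its state) away.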

Don't Mutate the Wrapped Component

Mutating the Wrapped Component inside a HOC makes it impossible to use the Wrapped Component outside the HOC. If your HOC returns a WrappedComponent, you can almost always be sure that you're doing it wrong. The example below demonstrates the difference between mutation and composition.
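A sketch of that difference, with components modeled as plain function objects:

```javascript
const Original = (props) => props.text;

// Mutation: patches and returns the very same component object,
// so every other consumer of Original sees the change too.
const withMutation = (Wrapped) => {
  Wrapped.touchedBy = 'withMutation';
  return Wrapped;
};

// Composition: wraps Original in a new component and leaves it untouched.
const withComposition = (Wrapped) => (props) =>
  Wrapped({ ...props, text: props.text.trim() });
```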

Moreover, if you mutate the WrappedComponent inside a HOC and then wrap the enhanced component using another HOC, the changes made by the first HOC will be overridden. To avoid such scenarios, you should stick to composing components rather than mutating them.

Namespace Generic Propnames

The importance of namespacing prop names is evident when you have multiple HOCs stacked up. A HOC might push a prop into the WrappedComponent whose name is already used by another higher-order component.
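A minimal sketch of two such conflicting HOCs, modeled as plain functions (the implementations are illustrative):

```javascript
// Both HOCs set a prop called `name` on the way down.
const withMouse = (Wrapped) => (props) =>
  Wrapped({ ...props, name: 'Mouse' });

const withCat = (Wrapped) => (props) =>
  Wrapped({ ...props, name: 'Cat' });

const NameComponent = ({ name }) => `My name is ${name}`;

// The inner HOC writes last, so its value silently wins:
const EnhancedComponent = withMouse(withCat(NameComponent));
// EnhancedComponent({ name: 'This is important' }) → 'My name is Cat'
```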

Both withMouse and withCat try to push their own version of the name prop. What if the EnhancedComponent also had to share some props with the same name?

<EnhancedComponent name="This is important" />

Wouldn't it be a source of confusion and misdirection for the end developer? The React DevTools don't report any name conflicts, and you will have to look into the HOC implementation details to understand what went wrong.

This can be solved by scoping prop names to the HOC that provides them, as a convention. So you would have withCat_name and withMouse_name instead of a generic name prop.

Another interesting thing to note here is that ordering your properties is important in React. When you have the same property multiple times, resulting in a name conflict, the last declaration will always survive. In the above example, the Cat wins since it's placed after { ...this.props }.

If you would prefer to resolve the name conflict some other way, you can reorder the properties and spread this.props last. This way, you can set sensible defaults that suit your project.
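For example, spreading the incoming props last turns the HOC's value into a mere default that the caller can override (sketched with plain functions):

```javascript
// `name: 'Cat'` comes first, so anything the caller passes in wins.
const withCatDefault = (Wrapped) => (props) =>
  Wrapped({ name: 'Cat', ...props });

const NameComponent = ({ name }) => name;
const Enhanced = withCatDefault(NameComponent);
// Enhanced({ name: 'This is important' }) → 'This is important'
// Enhanced({})                            → 'Cat'
```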

Make Debugging Easier Using a Meaningful Display Name

The components created by a HOC show up in the React DevTools as normal components, making it hard to tell the enhanced component and the original apart. You can ease debugging by providing a meaningful displayName for the higher-order component. Wouldn't it be sensible to see something like withCat(NameComponent) in the React DevTools instead of an anonymous wrapper?

So what is displayName? Each component has a displayName property that you can use for debugging purposes. The most popular technique is to wrap the display name of the WrappedComponent. If withCat is the HOC, and NameComponent is the WrappedComponent, then the displayName will be withCat(NameComponent).
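Sketched outside React, the wrapping convention looks like this:

```javascript
const withCat = (Wrapped) => {
  const Enhanced = (props) => Wrapped({ ...props, name: 'Cat' });
  // Fall back to the function's inferred name when no displayName is set.
  Enhanced.displayName =
    `withCat(${Wrapped.displayName || Wrapped.name || 'Component'})`;
  return Enhanced;
};

const NameComponent = ({ name }) => name;
// withCat(NameComponent).displayName → 'withCat(NameComponent)'
```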

An Alternative to Higher-Order Components

Although Mixins are gone, it would be misleading to say higher-order components are the only pattern out there that allows code sharing and abstraction. Another alternative pattern has emerged, and I've heard some say it's better than HOCs. It's beyond the scope of this tutorial to touch on the concept in depth, but I will introduce you to render props and some basic examples that demonstrate why they are useful.

As you can see, we've got rid of the higher-order functions. We have a regular component called Mouse. Instead of rendering a wrapped component in its render method, we are going to render this.props.children() and pass in the state as an argument. So we are giving Mouse a render prop, and the render prop decides what should be rendered.

In other words, the Mouse component accepts a function as the value of the children prop. When Mouse renders, it calls that function with its state, and the render prop function can use it however it pleases.
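The render-prop pattern modeled with plain functions — Mouse owns its state and hands it to whatever function it was given as children (the coordinates are illustrative):

```javascript
const Mouse = ({ children }) => {
  const state = { x: 10, y: 20 }; // stands in for this.state
  return children(state);        // the render prop decides what to render
};

const output = Mouse({
  children: ({ x, y }) => `The mouse position is (${x}, ${y})`,
});
// output → 'The mouse position is (10, 20)'
```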

There are a few things I like about this pattern:

From a readability perspective, it's more evident where a prop is coming from.

This pattern is dynamic and flexible. HOCs are composed statically; although I've never found that to be a limitation, render props are composed dynamically at render time and are more flexible.

Conclusion

Higher-order components are patterns that you can use to build robust, reusable components in React. If you're going to use HOCs, there are a few ground rules that you should follow. This is so that you don't regret the decision of using them later on. I've summarized most of the best practices in this tutorial.

HOCs are not the only patterns that are popular today. Towards the end of the tutorial, I've introduced you to another pattern called render props that is gaining ground among React developers.

I won't judge a pattern and say that this one is better than another. As React grows, and the ecosystem that surrounds it matures, more and more patterns will emerge. In my opinion, you should learn them all and stick with the one that suits your style and that you're comfortable with.

This also marks the end of the tutorial series on higher-order components. We've gone from ground zero to mastering an advanced technique called HOC. If I missed anything or if you have suggestions/thoughts, I would love to hear them. You can post them in the comments.

]]>2018-02-28T12:00:49+00:00//www.4elements.com/blog/read/challenge_build_a_react_component
https://www.4elements.com/blog/read/challenge_build_a_react_component#When:10:17:49ZThe best way to learn a new skill is by putting it into practice. So here's a challenge for you.

In this video from my course, Modern Web Apps With React and Redux, you'll be challenged to create a React Component for displaying a Twitter avatar. You can try solving it on your own (with a hint), or you can let me walk you through the solution.

Challenge: Build a React Component

The Challenge

In this challenge, you need to build a React component for displaying a Twitter avatar. As you can see from the CodePen below, it just takes props.handle and prints out a little URL in an image tag. Very simple.

What we need to do is write a profile component that uses a Twitter avatar component to show the image and the name. You can see the ReactDOM.render call for some hints.

If you'd like to try this challenge on your own, go ahead! Otherwise, read on as I walk you through the solution.

The Solution

Start by forking the pen so that you can build your own component, and then rename it by adding "MY SOLUTION".

In our ReactDOM call, we have a Profile component that we're calling, and we give it a name and a handle.

So this should be pretty straightforward. Let's go ahead and create a profile. I'm going to do this as a stateless component, just using a JavaScript function. If you want, you can actually use React.createClass, or the class syntax itself. Do whatever you like. But I like using stateless functions as much as possible.

This is going to take one parameter, which is our props object, but it's going to have name and handle properties. So let's go ahead and destructure that.

const Profile = ({ name, handle }) =>

Then let's return a div. And inside this div, let's return an h1 with the name for this specific account. And underneath this, we will have a TwitterAvatar, which requires a handle property. So we will pass it a handle, which will be equal to the handle we have.
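Put together outside CodePen, the solution can be sketched with plain functions — here a tiny `h` helper stands in for React.createElement, and the avatar URL is purely illustrative (the real one comes from the original pen's TwitterAvatar):

```javascript
// `h` mimics React.createElement: function types are called with their
// props; host tags become plain objects describing the element tree.
const h = (type, props, ...children) =>
  typeof type === 'function'
    ? type({ ...props, children })
    : { type, props, children };

const TwitterAvatar = ({ handle }) =>
  h('img', { src: `https://example.com/avatar/${handle}` }); // illustrative URL

// The solution: an h1 with the name, and a TwitterAvatar given the handle.
const Profile = ({ name, handle }) =>
  h('div', null, h('h1', null, name), h(TwitterAvatar, { handle }));
```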

There we go. It should be that simple. So save this in CodePen, and you can see that we get ReactJS and we get the Twitter avatar.

Let's go ahead and change the name to Tuts+ and the Twitter handle to tutsplus, and you can see that it updates.

So, as you can see, we can change this to different names and Twitter avatars, and we can see this in action. Good job! You have built a very basic React component. It's a good place to start in seeing how you can create components and use their properties, and also how you can pass those properties on to other components, to do some of the work for you.

Here's the final pen showing the solution in full:

Watch the Full Course

React is a JavaScript library for building user interfaces that has taken the web development world by storm, and Redux is a great way of managing application state. In the full course, Modern Web Apps With React and Redux, you'll learn all about how React, Redux and other leading modules fit together for a complete picture of app development.

It's a comprehensive, four-hour course with 35 video lessons, and I'll take you through the process of using these two libraries to build a complete web application from scratch. You'll start with the simplest possible architecture and slowly build up the app, feature by feature. By the end, you'll have created a complete flashcards app for learning by spaced repetition.

You can take this course straight away with a subscription to Envato Elements. For a single low monthly fee, you get access not only to this course, but also to our growing library of over 1,000 video courses and industry-leading eBooks on Envato Tuts+.

Plus you now get unlimited downloads from the huge Envato Elements library of 440,000+ creative assets. Create with unique fonts, photos, graphics and templates, and deliver better projects faster.

]]>2018-02-28T10:17:49+00:00//www.4elements.com/blog/read/eloquent_mutators_and_accessors_in_laravel
https://www.4elements.com/blog/read/eloquent_mutators_and_accessors_in_laravel#When:13:00:05ZIn this article, we'll go through mutators and accessors of the Eloquent ORM in the Laravel web framework. After the introduction, we'll go through a handful of examples to understand these concepts.

In Laravel, mutators and accessors allow you to alter data before it's saved to and fetched from a database. To be specific, the mutator allows you to alter data before it's saved to a database. On the other hand, the accessor allows you to alter data after it's fetched from a database.

In fact, the Laravel model is the central place where you can create mutator and accessor methods. And of course, it's nice to have all your modifications in a single place rather than scattered over different places.

Create Accessors and Mutators in a Model Class

Now that you're familiar with the basic concept of mutators and accessors, we'll go ahead and develop a real-world example to demonstrate it.

I assume that you're aware of the Eloquent model in Laravel, and we'll use the Post model as a starting point of our example. If you haven't created the Post model yet, let's use the artisan command to create it.
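The artisan command in question looks like this (standard Laravel syntax):

```shell
php artisan make:model Post --migration
```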

As we've used the --migration option, it should also create an associated database migration. Just in case you are not aware, you can run the following command so that it actually creates a table in the database.

php artisan migrate

In order to run the examples in this article, you need to create name and published_at columns in the posts table. Anyway, we won't go into the details of migrations, as that's out of the scope of this article. So we'll get back to the methods that we're interested in.

As we discussed earlier, mutators are used to alter data before it's saved to a database. The naming convention for a mutator method is set{AttributeName}Attribute, where you replace {AttributeName} with the studly-cased name of the actual attribute — setNameAttribute for the name column, for example.
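A sketch of that mutator on the Post model (the class and column names follow the article; everything else is standard Eloquent):

```php
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    // Mutator: runs when you assign $post->name, before the row is saved.
    public function setNameAttribute($value)
    {
        $this->attributes['name'] = strtolower($value);
    }
}
```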

The setNameAttribute method is called before the value of the name attribute is saved in the database. To keep things simple, we've just used the strtolower function that converts the post title to lowercase before it's saved to the database.

In this way, you could create mutator methods on all columns of your table. Next, let's go through the accessor method.

If mutators are used to alter data before it's saved to a database, the accessor method is used to alter data after it's fetched from a database. The syntax of the accessor method is the same as that of the mutator except that it begins with the get keyword instead of the set keyword.
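A matching accessor on the same Post model might look like this (ucfirst is illustrative, chosen to match the behavior described later in the article):

```php
// Accessor: runs when you read $post->name after it's fetched
// from the database.
public function getNameAttribute($value)
{
    return ucfirst($value);
}
```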

Also, you need to create an associated route in the routes/web.php file to access it.

Route::get('mutator/index', 'MutatorController@index');

In the index method, we're creating a new post using the Post model. The value of the name column should be stored in lowercase, since we've used the strtolower function in the setNameAttribute mutator method.
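A hedged sketch of what that index method might look like (the post title and date are illustrative):

```php
<?php

namespace App\Http\Controllers;

use App\Post;

class MutatorController extends Controller
{
    public function index()
    {
        $post = new Post;
        $post->name = 'Post Title';          // setNameAttribute lowercases this
        $post->published_at = '2018-02-27';  // illustrative date
        $post->save();                       // stored as 'post title'
    }
}
```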

Date Mutators

In addition to the mutator we discussed earlier, the Eloquent model provides a couple of special mutators that allow you to alter data. For example, the Eloquent model in Laravel comes with a special $dates property that allows you to automatically convert the desired columns to a Carbon date instance.

In the beginning of this article, we created the Post model, and the following code was part of that class.
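The relevant part of that class is the $dates property — listing published_at there asks Eloquent to convert the column to a Carbon instance on retrieval:

```php
// Inside the Post model:
protected $dates = [
    'published_at',
];
```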

Also, let's create an associated route in the routes/web.php file to access it.

Route::get('accessor/index', 'AccessorController@index');

In the index method, we've used the Post model to load an example post in the first place.

Next, we're inspecting the value of the name column, and it should start with an uppercase letter as we've already defined the accessor method getNameAttribute for that column.

Moving further, we've inspected the value of the published_at column, and that should be treated as a date. Due to that, Laravel converts it to a Carbon instance so that you can use all the utility methods provided by that library. In our case, we've used the getTimestamp method to convert the date into a timestamp.
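Putting those inspections together, the AccessorController's index method might be sketched like this (dump is Laravel's debugging helper; the post ID is illustrative):

```php
<?php

namespace App\Http\Controllers;

use App\Post;

class AccessorController extends Controller
{
    public function index()
    {
        $post = Post::find(1);

        // getNameAttribute capitalizes the first letter on read.
        dump($post->name);

        // published_at is a Carbon instance thanks to $dates.
        dump($post->published_at->getTimestamp());
    }
}
```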

And that brings us to the end of this article!

Conclusion

Today, we've explored the concepts of mutators and accessors of the Eloquent ORM in Laravel. It provides a nice way to alter data before it's saved to and fetched from a database.

For those of you who are either just getting started with Laravel or looking to expand your knowledge, site, or application with extensions, we have a variety of things you can study in Envato Market.

Don't hesitate to share your thoughts using the feed below!

]]>2018-02-27T13:00:05+00:00//www.4elements.com/blog/read/a_gentle_introduction_to_hoc_in_react_learn_by_example
https://www.4elements.com/blog/read/a_gentle_introduction_to_hoc_in_react_learn_by_example#When:12:00:54ZThis is the second part of the series on Higher-Order Components (HOCs). Today, I will cover different higher-order component patterns that are useful and implementable. With HOCs, you can abstract redundant code into a layer of higher order. However, like any other patterns out there, it will take some time to get used to HOCs. This tutorial will help you bridge that gap.

Prerequisite

I recommend that you follow the first part of the series if you haven't already. In the first part, we talked about HOC syntax basics and everything you need to get started with higher-order components.

In this tutorial, we will be building on top of the concepts that we've already covered in part one. I've created several sample HOCs which are practically useful, and you can incorporate these ideas into your project. Code snippets are provided in each section, and a working demo of all the practical HOCs discussed in this tutorial is provided at the end of the tutorial.

HOC as a Wrapper Component

If you recall, the final example in my previous tutorial demonstrated how a HOC wraps the InputComponent with other components and elements. This is useful for styling and for reusing logic wherever possible. For instance, you can use this technique to create a reusable loader indicator or an animated transition effect that should be triggered by certain events.

A Loading Indicator HOC

The first example is a loading indicator built using HOC. It checks whether a particular prop is empty, and the loading indicator is displayed until the data is fetched and returned.

This is also the first time that we've used the second parameter as input to the HOC. The second parameter, which I've named 'loadingProp', is used here to tell the HOC that it needs to check whether that particular prop is fetched and available. In the example, the isEmpty function checks whether the loadingProp is empty, and an indicator is displayed until the props are updated.
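A minimal model of the loader HOC, with components as plain functions (isEmpty is a simplified version of the tutorial's helper, and the spinner is just a string here):

```javascript
const isEmpty = (prop) =>
  prop === null ||
  prop === undefined ||
  (Object.prototype.hasOwnProperty.call(prop, 'length') && prop.length === 0);

// The second parameter names the prop whose arrival ends the loading state.
const withLoader = (loadingProp) => (Wrapped) => (props) =>
  isEmpty(props[loadingProp]) ? 'Loading...' : Wrapped(props);

const ContactList = ({ contacts }) => contacts.join(', ');
const ContactListWithLoader = withLoader('contacts')(ContactList);
// ContactListWithLoader({})                           → 'Loading...'
// ContactListWithLoader({ contacts: ['Ada', 'Grace'] }) → 'Ada, Grace'
```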

You have two options for passing down data to the HOC, either as a prop (which is the usual way) or as a parameter to the HOC.

Here is how I choose between the two. If the data doesn't have any scope beyond that of the HOC and if the data is static, then pass them as parameters. If the props are relevant to the HOC and also to the wrapped component, pass them as usual props. I've covered more about this in my third tutorial.

State Abstraction and Prop Manipulation

State abstraction means generalizing the state to a higher-order component. All the state management of the WrappedComponent will be handled by the higher-order component. The HOC adds new state, and then the state is passed down as props to the WrappedComponent.

A Higher-Order Generic Container

If you noticed, the loader example above had a component that made a GET request using the fetch API. After retrieving the data, it was stored in the state. Making an API request when a component mounts is a common scenario, and we could make a HOC that perfectly fits into this role.

It accepts a configuration object as an input that gives more information about the API URL, the method, and the name of the state key where the result is stored. The logic used in componentWillMount() demonstrates using a dynamic key name with this.setState.
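The shape of that container HOC, with fetching and the lifecycle omitted so the sketch stays self-contained (the config keys follow the description above; the endpoint is illustrative):

```javascript
const withGenericContainer = ({ url, method = 'GET', resultPropName }) =>
  (Wrapped) => (props) => {
    // The real HOC fetches `url` in componentWillMount and stores the
    // result with this.setState({ [resultPropName]: data }) — note the
    // dynamic (computed) key name. Here we just model the resulting props.
    const state = { [resultPropName]: [], isLoading: false };
    return Wrapped({ ...props, ...state });
  };

const ContactList = (props) => props.contacts;
const Enhanced = withGenericContainer({
  url: '/contacts',
  resultPropName: 'contacts',
})(ContactList);
```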

A Higher-Order Form

Here is another example that uses the state abstraction to create a useful higher-order form component.