Andy Huggins Blog
http://andrewhuggins.com/feed
Updated: 2017-11-01T05:04:43+00:00
Author: Andy Huggins

http://andrewhuggins.com/post/yo-my-indexes-got-indexes

Have you ever heard that you should add indexes to your MySQL tables?

Have you ignored that advice because you didn't believe it would be a serious improvement?

In a recent project, we had to import a lot of past data: four years' worth for basically an entire industry. I'm talking 22,500 CSV files' worth of data.

I had already created the import scripts and run them many times uploading new data, but now I needed to import this old data. I transferred the CSV files to the server, edited some scripts to turn those CSV files into jobs, and set up multiple workers to process the files.

I kicked off this "run" and left it. After a few days (probably 60 hours or so), the queue had been worked down to 15,500 files, meaning around one third had been completed.

Clearly, this was taking a long time.

I talked with a co-worker and we did a quick code review to see if we could think of anything. We ended up noticing that a lot of the imports were using a Laravel Relationship on our models. So thinking about how those work, we realized that we had not set up indexes on the main keys we were using to relate the data.

We figured it would be worth a try and created new migrations for these "key" fields.
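As a sketch of what such a migration can look like (the project's real table and column names were different; "records" and "employer_id" here are illustrative):

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

// Hypothetical migration: indexes the foreign key that the
// Eloquent relationship joins on.
class AddIndexToRecordsTable extends Migration
{
    public function up()
    {
        Schema::table('records', function (Blueprint $table) {
            // The "key" field used to relate the data.
            $table->index('employer_id');
        });
    }

    public function down()
    {
        Schema::table('records', function (Blueprint $table) {
            $table->dropIndex(['employer_id']);
        });
    }
}
```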

I deployed, ran the migrations, and kept watching the number of jobs left to process. As soon as the indexes were built, the job count in Sequel Pro dropped so fast with each refresh that it looked like there was an issue.

15,500 jobs had dropped to 14,200 in a few minutes.

A few minutes later there were 13,000 (or fewer) left. And no new "failed jobs" in the failed_jobs table.

I looked at the `updated_at` fields on the records that should have been importing, and to my amazement, they were being updated.

The jobs were flying. Within about 30-45 minutes, the CSV file import was done. I could not believe it.

Another Example of Indexes

After the import completed, I poked around in the application to see if things slowed down at all with all the new data. And on one of the most important pieces of the app, I noticed that the load time had increased by 10x.

I then loaded up the Laravel Debugbar and looked at the queries happening on the page. There were 56 queries, and the page was taking 26-32 seconds.

But this is where the power of the debugbar comes in. If you look through the queries, it lists how long it takes for each query to execute. Out of the 56 queries for that page, most were pretty fast, 1.32ms, 12ms, 600ns (nanoseconds?), and then I came across one that was running for 24 seconds.

The debugbar said the page took 26-32 seconds (the time varied across multiple loads) and this one query was taking 24 seconds. Well, I think I had just found the next thing to optimize.

Reading the query, it was clear that another index was needed. I made the migration and ran it.

Repeating this process, I was able to get it down to about a half second of load time.

2017-11-01T05:04:43+00:00
http://andrewhuggins.com/post/working-with-laravel-queues

As I work on projects, there are often interesting decisions that I am faced with. In the most recent project that I have been working on, I decided to use a queue to solve a particular problem.

The particular problem was the need to allow a user to upload an import file, but not have to wait for the import process to complete before the user can leave the page.

Using a queue ended up seeming like a great choice because it allowed me to upload the file, add it to the queue, and then send a message to the user that the file had been successfully uploaded and is in the process of being imported.

I set up a database queue...which is not usually recommended for a queue, but in this case, because there is pretty much only one single user of the app, I determined it would be fine to use the database queue instead of setting up Redis or some other queue system.
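To sketch the flow (the controller, the job class `ProcessImportFile`, and the field names here are illustrative, not the project's actual code):

```php
<?php

use Illuminate\Http\Request;

class ImportController extends Controller
{
    public function store(Request $request)
    {
        // Save the upload, then hand the slow work to the queue.
        $path = $request->file('import')->store('imports');

        // With the database queue driver, this just inserts a row in
        // the jobs table for a worker to pick up later.
        dispatch(new ProcessImportFile($path));

        // The user gets immediate feedback instead of waiting for
        // the whole import to run.
        return back()->with('status', 'File uploaded and queued for import.');
    }
}
```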

I added the queue system to the project and deployed it, and everything was going smoothly.

Over the next week or so, I ended up having to adjust the import script. So I made my changes, tested it locally in the CLI and since everything looked right, I pushed it up and deployed my changes.

The client messaged me the next day and said that "something is not working correctly." This seemed odd since I had deployed and tested my code. I opened the db and looked, and sure enough the change I had made did not seem to be working. I then opened the import file in vim on the server to see if the new code was there. It was. This was odd.

I figured that something odd had happened, and had a script I could run to update the new records with the missing info. Ran the script, told the client something weird happened, but the code was there and we will have to see tomorrow what happens.

Side note, this client is awesome and we have a great working relationship where he understands that code can sometimes be complicated. And with it being him as the only user, I did not have to worry about a business critical problem since he was not fully using the app at this point.

The next day, I got another message with the same issue. Looking again, the code was there; the db records just didn't update. It took me some time, and a little Googling, but eventually something I read triggered the realization: PHP usually tears the application down after each request, but when you use a queue and, in turn, use Supervisor to "keep the worker alive," the worker is a single long-running instance of the application...and that instance was not reading the new code.

This is the first lesson I want to share about working with queues: 1. Be sure to restart your workers when you deploy new code. I have talked with other devs about this, and they learned this lesson too. So just be aware of it. Luckily, Laravel provides an easy way to do this with `php artisan queue:restart`, and you can add that to your deploy script to ensure that the workers are updated when you deploy.

Once I was aware of this, restarting the workers fixed the issue: the import scripts began running correctly and I no longer had any of these seemingly unexplained "code not working" issues.

MaxAttemptsExceededException in failed_jobs

The next piece that was a little tricky to figure out was a lot of failed jobs showing up. I think it was tricky because what was really happening is not actually described by the "MaxAttemptsExceededException" being thrown.

The project required importing a lot of data, 22,500+ csv files. It's essentially four years' worth of data for a particular industry. But this is almost an ideal situation for a queue: each csv contains independent data, meaning each file can be imported by itself and in no particular order.

So I created a script to create jobs for each csv, and then just let the workers run. After it was running for 24+ hours, I noticed these failed jobs, and the above exception.

I started thinking about the max_execution_time option that php has, but I had been able to exceed that in the CLI with some test import files, so I did not think that was the case. And when I tried importing one of the failed csvs locally, it completed without issue.

So I began trying to think about what could be causing a problem, and then thought that there could be a timeout on how long a worker process tries to run.

Turns out, that is exactly what happens. It's in the docs, but this is my first experience with queues, and I guess I simply skipped this part of the docs.

In Laravel you simply edit your `config/queue.php` file and set the `expire` option for the relevant queue connection to something appropriate for how long the job might take. In my case, the DB showed most imports completing in 45 seconds to a minute and a half.

I set mine to 1:30 and deployed, but kept noticing that failed jobs were still happening. A little more reading, and it turns out you should also set the `retry_after` value to match the `expire` setting. The `retry_after` setting tells the queue how many seconds to let a job run before assuming it has failed and handing it to a worker to try again.

I tried another csv locally and timed it: 1:36, which made it clear that the `retry_after` setting was causing the problem. I adjusted the setting to four minutes and deployed.
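In `config/queue.php` that ends up looking something like this sketch (database driver assumed; in recent Laravel versions `retry_after` is the key, while older versions called it `expire`):

```php
// config/queue.php (sketch)
'connections' => [

    'database' => [
        'driver'      => 'database',
        'table'       => 'jobs',
        'queue'       => 'default',
        // Give each job up to four minutes before it is considered
        // failed and released to be retried.
        'retry_after' => 240,
    ],

],
```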

I restarted the entire import, all 22,500 files and let it run. After an hour, I had zero failed jobs, and this seemed to solve the problem. And this is the second thing I learned from my experience working with queues...2. Be sure your workers have enough time to complete the task given.

Failed_jobs table

Setting up a queue table is super simple in Laravel...`php artisan queue:table` will generate the migration for a basic queue table.

I assumed that migration would also handle failed jobs. After running some test jobs, things appeared to be working correctly, but then I realized I didn't actually know where a failed job would be recorded.

Back to the docs, and I see there is another command you need to make the migration for the failed_jobs table. `php artisan queue:failed-table`.

Just be sure to run `php artisan migrate` after the two queue table commands and you will have the needed queue tables in your db.

Wrap up and general experience with queues

After sorting out the above, and considering this was my first time using queues, I think queues are awesome and surprisingly easy.

The above issues were partly my fault for not fully reading through the docs, but because I experienced them firsthand, I will not forget them soon.

I think the queue was the right decision to allow the user to get feedback from the system quickly, and not have to wait for the import to complete. This is a great user experience, and could be improved with a notification to the user when the queue is complete, but I did not see the need for that for the client.

I do think that queues are great, but that they are not needed for everything. Sometimes it's perfectly fine to have the user wait if it's only expected to be a few seconds. When you get into minutes, or need something to "happen in the background" queues are a great choice.

2017-10-28T01:20:19+00:00
http://andrewhuggins.com/post/a-package-for-adding-a-helpersphp-file-to-your-laravel-application

Ever need to add a Helpers file for some global functions in your Laravel Application? You might know you can create a file and then use composer.json to autoload that file into your app. But if you are like me, you might forget the syntax, and then you have to look that up when you want to create your helpers.php file.
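For reference, the composer.json syntax in question is the `files` autoload key (the path here assumes your file lives at app/helpers.php), followed by running `composer dump-autoload`:

```json
{
    "autoload": {
        "files": [
            "app/helpers.php"
        ]
    }
}
```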

If you use Laravel, then you are extremely familiar with adding a package from Packagist. So I created a package to quickly add a helpers.php file to your Laravel app.

It's ready for Laravel 5.5, coming out next week, and uses the "Auto-Discovery" feature arriving in 5.5, which means it really could not be simpler to add this package and continue working.

In 5.5 you just need to run `composer require ahuggins/helpers`, and Auto-Discovery will register the package's service provider for you.

In 5.4 and below, you will still have to update your `config/app.php` file and add the following to the providers array: `AHuggins\Helpers\Providers\HelpersServiceProvider::class,`

Then, you just need to publish the file with `php artisan vendor:publish --tag=helpers` and then you should be ready to go with a place to store global functions.

2017-08-25T20:45:29+00:00
http://andrewhuggins.com/post/composer-package-development-for-easy-local-dev

Composer is a super useful tool in modern PHP, and there are times you want to make packages that you will use in many other projects. I often see people developing within their applications, meaning they create their models, views, commands, events and more in the app folder of their applications...then when they want to make a package, they have to copy all those files, update the namespaces, and test that it all works.

But there is a much easier way. I put this video together to help explain how to set it up:

Here is a gist, which includes the important parts in order to allow Composer to use the local package for development:

2017-05-31T00:11:21+00:00
http://andrewhuggins.com/post/future-reference-silly-cli

I found this package before and thought it was a nice wrapper around Symfony/Console, making the syntax and initial setup a little easier/cleaner. However, today I went looking for it and could not remember its name, so I am making a post for my own future reference.

It's really just a little bit of convenience wrapped around the already great Symfony/Console package. It makes defining a command a little easier, and then you can also just define a closure instead of creating a whole Command class if you don't want to.

I like this style, and think it is great to use to quickly build out a simple CLI application if you need to, so check it out.

2017-04-13T16:38:15+00:00
http://andrewhuggins.com/post/eloquent-orm-chunk-method-and-very-weird-results

First, let me say that this is not an issue with Eloquent ORM, but an issue with how databases return results, and it could be an issue with SQLSRV more than anything. However, I have not tested anything on MySQL or MariaDB just yet.

So let's establish the use case for `chunk` and why I am using it to begin with.

The client project has many users that have many related records...essentially like an Employer and Employees, where some Employers have up to 3k or more Employee records. Now you could query and grab all the records at once, but depending on your setup, you might run out of allocated memory for PHP. Which is where the `chunk` method enters. `Chunk` allows you to take your 3k results and break them into sets of a size you choose (I chose 50). Then you can act on each collection of 50 records as if you were looping over all 3k, but avoid running out of memory.
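A sketch of that usage (Employer/Employee and the column names are illustrative, not the client's actual models):

```php
<?php

// Only 50 Employee models are hydrated in memory at any one time.
Employee::where('employer_id', $employer->id)
    ->chunk(50, function ($employees) {
        foreach ($employees as $employee) {
            // ...act on one record...
        }
    });
```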

This is great, and generally works perfectly. And you should reach for it to make sure your applications run smoothly.

There are a couple pitfalls to be aware of though and thinking through why the pitfalls exist, actually can help you realize a little more about how relational databases work, or should work.

Chunk() and deleting records

I think a great example that makes it easy to see what is going on is when you have many records and you want to delete them. Imagine your set: some five thousand records are in it, and you need to delete them. You determine you can't simply fetch the entire set because of memory limits, and you add the chunk method to accomplish your task.

You run your code, and then you check the count, and for some reason you are left with roughly half the results. You think for a moment, but why, I wanted it to delete all of the records.

You think about it, and eventually you consider what is happening in the queries that the Chunk method introduced. Each time the Chunk method executes, it grabs another page of data. This seems ok, but you have to really think about the details of that sentence. The first chunk grabs "Page 1" (the first 50) of results from the whole set of 5k...ok, no problem. But the second time the Chunk method executes, it grabs "Page 2" of a set that is now 4,950, and the records that would have been on Page 2 have shifted forward into Page 1. This easy-to-overlook detail, that the result set is changing for each Chunk execution, means a page of results is skipped on every iteration.

The solution of adding a while loop around the query means it will continue to execute until all the results are gone: a solution that works, even if it's not as clean as we might like.
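Sketched with the same illustrative names, the workaround always deletes from the front of the set, so no page is ever skipped:

```php
<?php

// Keep deleting the first 50 matching rows until none remain,
// instead of paging with chunk() while the set shrinks underneath us.
while (Employee::where('employer_id', $employerId)->exists()) {
    foreach (Employee::where('employer_id', $employerId)->take(50)->get() as $employee) {
        $employee->delete();
    }
}
```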

The example above shows that you need to be cautious when using the chunk method, but the next example will really make you aware.

Chunk() and selecting records

The issue I experienced was not as easy to figure out as the deleting example, because selecting records (and perhaps updating them) leaves the same number of records in the table, so everything seems to work unless you really dig into the data. When deleting, you know something is happening because your result set shrinks. A set that stays the same size gives far less of a clue about what is happening.

Additionally, I was running the code in a local environment, a stage environment and a production environment. Locally, things were working as expected, where stage and production were inconsistent.

Working with a co-worker, we figured inspecting a few columns of the result set might prove useful. We created a script that would grab the results, echo an HTML table, and create an MD5 hash of the data, which would let us see if the results were in fact changing. We built it in the local env and ran it a few times; the MD5 hash matched every time. This indicated that we were in fact getting the correct results from our query (Eloquent calls) and that our results were consistent.

We moved the exact same code to stage and checked our hash. The first time we ran it, we got something different than we got locally, and I will point out that the data in the two environments matched, because we had made a backup and restored the db locally. The data was identical. We ran it again, and got a different hash than the first hash we ran in stage. A third time, a fourth time, a fifth time, all different hashes than the one we got locally, and all different than each other.

Discussions were happening, we could not put our finger on why any of this was happening. We checked versions for SQLSRV, PHP, Apache, IIS*, and anything else we could think of. Our next plan was, let's get the raw sql query from Eloquent and see what results we get from the DB in a GUI interface. Removing any potential issues that PHP or Apache/IIS might be introducing.

We dumped the sql, replaced the prepared statement '?' placeholders, and ran it locally. The results always stayed the same. We then ran it on stage, and the data initially appeared to be different each time we ran the query. We dug into the data...we had an ID field, so we could see whether we were getting duplicates or not...and to our surprise, we were not getting duplicates...so we sorted the results, compared them to the local results, and they matched.

So we were getting the same raw data back, but something was just off. Then since we had the data in the same order, we ran it through an md5 hash again, and the hash now matched our original hash locally that did not change.

And the "Aha" moment hit us

It seems the data is right, but the order is changing every time in the stage environment. We quickly added an "Order By" statement in the GUI DB program on stage, and with four executions of the sql, our result order matched every time. Quickly updated our Eloquent calls with an orderBy() call, and checked our md5 hash....matched every time. "Aha."

A Google search later, and our recent discovery was confirmed: a database returns results in whatever order it likes unless a specific order is requested.

But why is it an issue? Why does the order matter?

Thinking about the situation again, I would suggest thinking about the example when deleting records. The first chunk is deleted, leaving a set of 4950...the second chunk offsets by 50 meaning it skips the first 50 the second time around.

It might help to think about it more literally. Imagine you have a bag with 5000 stones in it: some black, some white, some gray, some green, some blue. The first 50 you pull out will be a mix of the stones. But since the stones come out in no particular order, you effectively put the original 50 back into the bag before the second pull. When the second 50 are pulled, it's possible you have re-pulled some of the first 50 while pulling some new ones. Repeat this 100 times, because you know there are exactly 5000 stones, and some stones/rows get pulled multiple times while others are never pulled at all, which produces very wild/unpredictable results.

Now if you ordered the stones in the bag, a specific way (by color or id), then pulled the first 50, then the next 50, then the next 50, and so on. You would work your way through all the stones/rows in the set. No stones/rows pulled multiple times, and all stones/rows will be pulled.
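In Eloquent terms, the fix is a one-line addition (names illustrative again): with an explicit order, every "page" comes from the same stable sequence.

```php
<?php

// Without orderBy(), SQL Server may serve each "page" from a
// differently ordered set; with it, page 2 always follows page 1.
Employee::where('employer_id', $employerId)
    ->orderBy('id')
    ->chunk(50, function ($employees) {
        // ...process the page...
    });
```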

Wrapping up

I did some Googling, trying to find out why this was happening locally and not in the stage environments, and was not able to find any conclusive information. Maybe there was a change between the SQLSRV versions locally and the stage/production version, but I was not able to find anything describing a change like that. I am also not sure if this is how all relational databases work or if it is specific to SQLSRV. Though if your row counts match but the data is mismatching, this could be the culprit.

* We were running Apache locally and IIS on stage/prod...circumstance of the project.

2017-02-28T08:46:38+00:00
http://andrewhuggins.com/post/deploying-laravel-with-elastic-beanstalk-env

Deploying on a Digital Ocean instance is pretty straightforward. Once you get your instance up, if you are using Forge, you can simply add your .env key:values by going to the server, then the site you want to edit, click the Environment tab, and then click the Edit Environment button.

If not on Forge, you ssh into the server, navigate to the root of the site you want to edit, vim .env and add your key:values.

You might think you can just ssh into the Beanstalk server instance that was created and add your .env file there...well, that will work, but only until Beanstalk creates a new server instance. Beanstalk just throws away a server and everything on it when it needs to. So manually adding the .env file this way is not great because it will be gone the next time you deploy your code.

So we need a way to get our env values deployed each time Beanstalk creates a server instance.

This took me a while to figure out. I found some people who were copying a file from S3 in a build script...but that seemed more complicated than I wanted. I kept thinking there had to be a secure way to set environment variables in Beanstalk, and there is!

It's kind of similar to Forge, but it isn't as easy to find. Go to your Beanstalk dashboard, click on the app you want to configure, then click on the Configuration link in the left side navigation, and you should see something like this image:

In the Software Configuration panel, click on the Gear icon. You will be taken to a page, scroll down toward the bottom and you will see this:

You can add your .env keys and values here.

How does this work though?

Yeah, I wasn't sure if this would work either, but to understand it you have to know how the .env values work in Laravel.

Laravel uses the phpdotenv package; when the app boots (on a request) it loads the values from the .env file into the PHP global $_ENV and $_SERVER variables. So now the question is: how do we get Beanstalk to load them?

Well when Beanstalk builds your server instance, it looks in this list of variables you create and loads them into the $_ENV and/or $_SERVER globals too.

Looking deeper into the phpdotenv package, you see that the globals are overwritten when a key is matched in the .env file. So this means we simply don't deploy a .env file to our staging and production environments, which are run by Beanstalk.

There is a little trick though

I think Beanstalk can at times build new RDS instances too, although I am not sure if the passwords are retained across new instances.

The good thing is, AWS makes this really easy. It automatically loads the RDS credentials into the $_SERVER global for you, so you can access them like this:
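The database config ends up looking something like this sketch (the RDS_* keys are the names Beanstalk injects into $_SERVER; the parentheses and null coalescing avoid notices when those keys are absent, such as locally):

```php
// config/database.php (sketch)
'mysql' => [
    'driver'   => 'mysql',
    // Prefer an explicit env value; otherwise fall back to the
    // credentials Beanstalk provides for RDS.
    'host'     => env('DB_HOST') ?: ($_SERVER['RDS_HOSTNAME'] ?? null),
    'port'     => env('DB_PORT') ?: ($_SERVER['RDS_PORT'] ?? null),
    'database' => env('DB_DATABASE') ?: ($_SERVER['RDS_DB_NAME'] ?? null),
    'username' => env('DB_USERNAME') ?: ($_SERVER['RDS_USERNAME'] ?? null),
    'password' => env('DB_PASSWORD') ?: ($_SERVER['RDS_PASSWORD'] ?? null),
],
```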

Notice there is no default value being passed to the env() calls? This means that if they are not set, the function returns null. Using the short ternary operator, we can then tell it to fall back to our RDS credentials.

Now whenever a new EC2 server instance is created, it will have access to all the correct environment variables...and if Beanstalk ever needs to create a new RDS instance, the EC2 instances will be updated with whatever password is being used at the time.

One last thing to note: all .env values are treated as strings by the phpdotenv package. The env() function in Laravel casts a few values to booleans (for example, it casts the string 'false' to false...because php would otherwise treat a non-empty string as true), so everything you enter in the Beanstalk environment interface is nothing more than a string. The APP_KEY value looks like base64:somecrazystring, but that doesn't really mean anything when you are storing it. Laravel just parses the beginning of the string and looks for base64:, which determines whether it should base64-decode the string or not.

Well, I hope this helps other people looking to do the same thing.

2017-01-11T08:42:31+00:00
http://andrewhuggins.com/post/recreating-laravel-dd-and-dump-functions-without

These are two really simple functions, `dd` and `dump`, but it's because they are so simple that I think they seem so handy.

I've written about them before, but now I find myself on a non-Laravel project, and I really want them to help in debugging things in the application.

One little hiccup: this project doesn't use Composer, so I couldn't just pull in the Dumper that Laravel uses and add the `dd` and `dump` functions the normal way. So now what to do?

Well, since they are so simple, you can recreate them pretty quickly and easily. Something like this:
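A minimal sketch of what that recreation can look like, assuming no Composer at all (plain var_dump() stands in for Symfony's fancier dumper):

```php
<?php

// Hypothetical recreations of Laravel's dump() and dd() helpers.
// Guarded with function_exists in case the real helpers ever load.

if (! function_exists('dump')) {
    function dump()
    {
        // func_get_args() keeps the "pass any number of arguments" feature.
        array_map(function ($value) {
            echo '<pre>';
            var_dump($value);
            echo '</pre>';
        }, func_get_args());
    }
}

if (! function_exists('dd')) {
    function dd()
    {
        // Same dumping logic, then stop execution ("dump and die").
        array_map(function ($value) {
            echo '<pre>';
            var_dump($value);
            echo '</pre>';
        }, func_get_args());

        die(1);
    }
}
```

Drop the file into any project, include it early, and both helpers are available everywhere.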

I wanted to keep the ability to pass any number of arguments, since I find it helpful to pass a string telling me what I am looking at. I also like to dump throughout the code sometimes to see what something is at a given point, and sometimes I dump or dd a few variables at once to see more when debugging. This feature is why I used `func_get_args()` instead of declaring any parameters.

I quickly wrote a first draft of these using a `foreach` loop, but having recently reread Adam Wathan's Refactoring to Collections, I wanted to try not using a loop. I first ended up with a different set of functions because I was trying to be DRY. Then I looked and couldn't really see a reason not to keep it as simple as the ones posted above.

This is a good reminder to write the simplest quickest code, then take a moment and consider a new technique, refactor if you can, then pause and think if it's really as simple as it could be. What I ended up with, I think, is some very simple functions that I can add to any project and they should not have any issues working.

I do prefer not having a loop, because that was a bit of a mess, but in trying to be DRY...referencing `dump` from `dd` just seemed more complicated than rewriting the single `array_map` line in both functions. So I left the `array_map` line repeated, and it's fine.

This version does not allow for the nice "tree-like" navigation that the Dumper class provides in the Laravel one, but in a pinch, these will work and sure beats typing out `echo "

This is a good channel if, like me, you are still learning and have minimum exposure to other languages. I find it helpful to see some of the same programming concepts I am familiar with in PHP, explained by a knowledgable coder in JavaScript.

Check out this video as an example:

Or even how to begin functional programming in JS:

I know I will be working through some of the backlog. If you know of more good resources, please tweet them to me @andy_huggins. Thanks!

2016-10-16T17:00:29+00:00
http://andrewhuggins.com/post/great-example-of-refactoring-mathias-verraes

I followed a link to an example of refactoring and thought it was a really good example. It shows the basics of refactoring:

Name things so that they make sense

Extract to named methods to make things easy to read

Remove levels of indentation

Write tests to prove the success/continued functionality of the app

Continue extracting

I really like that Mathias Verraes explains his thinking while he refactors the method, and the Q&A that follows.

2016-10-02T17:06:49+00:00
http://andrewhuggins.com/post/crontab-disappeared-help

As most of us work, we get used to typing things and usually assume that we will never make a mistake, right? We, and I definitely mean me, should probably slow it down just a little bit and be sure to read the commands we type into the terminal before slapping that enter key.

I recently encountered an issue where I needed to edit a cron job command, so of course I logged into the server, quickly typed `sudo bash`, entered my password, then typed `crontab -r`....nothing happened; I was just given a new prompt. I retyped it, this time `crontab -e`, and Vim opened, but the file was empty.

What the heck?

I logged out of the server, re-logged in, thinking that maybe I somehow messed something up along the way. I typed `crontab -e`, Vim opens and the file is empty.

That sinking feeling sets in, "did I somehow delete the crontab...how does that even happen?" And immediately, I think "why would it not prompt you to ask 'are you sure?' when you could potentially delete the crontab????"

I asked a co-worker, who looked in the logs...I should have thought to do that, but didn't. He says "looks like the crontab was deleted."

"SHHHHIIIIIIIIIIIITTTTTTTTTTT," I said.

I look back through my terminal history and see that I ran `crontab -r` (did you notice it above?) Yeah, that deletes the crontab for the user you currently are, in my case root.

This is particularly annoying considering that 'e' and 'r' are directly next to each other on the keyboard...I wonder how many hours have been lost by this?

Luckily, in this case I only lost two jobs that I was familiar with and could easily rebuild...sidenote, if you need to recreate your crontab, you might be able to do some reverse engineering from the `/var/log/cron` file. There you basically get a log of what the cron runs and when.

You could also....and very much SHOULD...create a backup of your crontab just in case this happens. You might even mention it in the README of your project or keep that copy in your version control repository.

If you have control of the server, you could also alias `crontab='crontab -i'` in your shell...this prompts the user before deleting the crontab. Which in my opinion should be the default behavior. But at least this way you can enable that functionality if you want to.

Thought I would mention it in the off chance someone else experiences this, and by writing this, I will be more mindful of what could happen with the smallest of typos.

2016-09-28T16:02:04+00:00
http://andrewhuggins.com/post/gmail-address-tips

This may seem like a boring subject, but these little tricks can actually provide some nice little benefits that you may not be aware of.

The dots don't matter

If your email address is the.fifth.of.november@gmail.com, you could use thefifthofnovember@gmail.com or any variety of dots in the handle of your address. They simply don't matter. There are some services, like Twitter, that do not allow dots in the username, but Gmail does not care.

That one was light, the next one might surprise you.

You can add a + and what follows does not matter

We probably think that our email address has to be exact, but with Gmail that does not appear to be true. Gmail allows you to add a + followed by anything you want before the @gmail.com, and mail sent there will still be delivered to you.

For example, if your address is iamdeveloper@gmail.com you can do iamdeveloper+iamthebest@gmail.com and messages sent to this address will still be delivered to iamdeveloper@gmail.com.

Ok, so who cares?

It may seem trivial, but let's say you want to set up a fake Twitter account. You are required to have a unique email address to do that, and now you can make one on the fly. Take dhh@gmail.com as an example. He probably uses that as the primary email address for his Twitter account. But say he wanted to set up a Twitter account for his business, Basecamp. He could (and realistically probably would) create a whole new email account and then set up the Twitter account with it. But if he wanted, he could go to Twitter, click to create a new account, and when asked for an email, enter dhh+basecamptwitter@gmail.com. Messages would still go to his primary inbox, and it would let him create a new Twitter account.

But that's not all

Think about other ways you can apply this. Let's say you are signing up for a new service. If you used a unique email address for each service, it would be even harder for someone to take your credentials from one service and use them to log in to another. Granted, you shouldn't reuse the same password across services, but in case you did, you would at least have a different email address.

Now, that becomes interesting when a company starts selling your email address to third party companies. You could actually see who sold your email address if they don't scan their address list for this. And then you know exactly who to be pissed at.

This would also let you easily set up filters in your inbox, or block spam messages.
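Both tricks amount to a simple normalization rule: in the local part (before the @), drop the + and everything after it, then drop the dots. A quick bash sketch (the function name is mine, not a real tool):

```shell
# Normalize a Gmail address: strip the +suffix and the dots from the
# local part; the domain is left untouched.
normalize_gmail() {
  local addr="$1"
  local local_part="${addr%%@*}"   # everything before the @
  local domain="${addr#*@}"        # everything after the @
  local_part="${local_part%%+*}"   # drop the + and anything after it
  local_part="${local_part//./}"   # drop all dots
  echo "${local_part}@${domain}"
}

normalize_gmail "the.fifth.of.november+news@gmail.com"
# → thefifthofnovember@gmail.com
```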

]]>2016-09-15T22:27:56+00:00Andy Hugginshttp://andrewhuggins.com/post/the-manager-is-a-coach-fallacyLet's establish the generally conceived idea of a manager:

A leadership role

Makes decisions

Creates plans for team

Works with team to execute

Making the analogy to a basketball team (I live in Lexington, KY, so bear with me), I think a lot of people would say the above criteria are met by the role of the coach. The coach is the person in control: they make decisions, they make plans, and they go over the plan with the team during practice in order to execute it.

I agree that the coach does seem to be the best fit. But this paradigm has issues. It fits the "top down" corporate structure, but I don't think it really works, or at least not well.

I like to think of the manager as the point guard on the court. Controls the ball, makes decisions, executes a plan by getting things in the right position.

I really like that in the point guard example, they are part of the team, they are participating...whereas a coach is on the sidelines.

The point guard fits the role of manager...how many times have you heard an announcer say "He's a point guard that really knows how to manage a game"? Some point guards can really lead a team in scoring, think Magic Johnson, but they also typically rack up a lot of assists, because they control the ball and can work to get people in the right position to score, or get the ball in position to set up a scoring opportunity. Meanwhile, the best coach in history will have zero points in a game.

Think of a point guard managing a game: working to get his teammates in good position, making passes so players can take an open shot or get an easy layup. Those are the managers I want to work for. A manager who feels they are a coach, not a player, says it all for me. A coach isn't on the team, he coaches the team. And that is a big difference. It doesn't necessarily fit the corporate structure, but maybe the corporate structure is outdated, or has been steered off course.

Think about this, if you have an open office where the "team" is...is your boss/manager in the same room? Or is the manager off in their own office? What's the point of this separation? Makes you question things when upper management says "team" doesn't it? I know when I played on a team as a kid, all the players on the team played, the coach did not play.

]]>2016-08-18T17:01:20+00:00Andy Hugginshttp://andrewhuggins.com/post/add-markdown-to-quick-look-previewIt's probably a really small tip, but something that has annoyed me a couple of times. We use Quick Look (I think that is what Apple calls the feature) all the time to preview files, especially text or html files. So wouldn't it be sweet if we could do that on Markdown files, since we usually have a Readme.md file in most of our projects? I also like to create a `docs` folder in most of my projects as a place to store notes about the project in markdown format, so wouldn't it be sweet to simply hit spacebar and get a quick look at it? I think so. So let's add that!
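One way to do it, assuming you have Homebrew installed (QLMarkdown is a third-party Quick Look plugin, and the package name and install syntax may have changed since this was written):

```shell
# Install the third-party QLMarkdown Quick Look plugin (assumed package name).
brew cask install qlmarkdown

# Reset Quick Look so it picks up the new generator.
qlmanage -r
```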

]]>2016-08-15T18:48:32+00:00Andy Hugginshttp://andrewhuggins.com/post/automatically-switch-folders-on-vagrant-sshThis isn't the biggest deal, it's just a little convenience that makes working with your vm a little easier/nicer.

So what do we need to do?

First, you will need to

vagrant ssh

Once in your vm, run an `ls -a` command, which should list all the files, including the hidden ones. We are looking for either a `.bashrc` or a `.bash_profile` file. You will want to edit this using Vim; see Laracasts' Vim Mastery for info on how to use Vim.

We are going to add a line to the `.bashrc` file (or .bash_profile if that is what you have), we want to add:

cd ../../var/www/

This path should be customized to the vm you are using, but it most likely is something like above.

Save the file. Now you should be able to `exit` from the vm, then `vagrant ssh` back into the vm and be in the folder where your code lives.
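You can also script the edit from inside the vm. A sketch, assuming your code is mounted at /var/www (adjust the path to your vm):

```shell
# Append the auto-cd line to ~/.bashrc, but only if it is not already there.
BASHRC="$HOME/.bashrc"
LINE='cd /var/www'
grep -qxF "$LINE" "$BASHRC" 2>/dev/null || echo "$LINE" >> "$BASHRC"
```

The `grep -qxF` guard makes the snippet safe to run more than once: it checks for the exact line before appending, so re-provisioning the vm will not stack duplicate `cd` lines in the file.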

These quick little helpers remove just a few steps from working on something, which lets you focus on what is important, and that is what we should all be focusing on.

]]>2016-08-02T18:43:39+00:00Andy Hugginshttp://andrewhuggins.com/post/manage-your-windows-with-spectacle-appI use the Spectacle App every day while I work on code. I find it so useful I am surprised that Apple has not provided something similar. Anyway, if you are interested in easy window management, watch the video to see how it works and then go get the app for free.


]]>2016-07-06T16:46:29+00:00Andy Hugginshttp://andrewhuggins.com/post/want-to-be-a-web-developerAs I think about how to write this article, I am leaning toward this just being really good life advice, so understand that some things will apply to things other than web development.

Web development is a hard job. It takes years to build the skills in order to even really begin contributing. You will be met with tons of failures, tons of hard problems to solve, tons of “why the hell doesn’t this work?”, but it also has tons of rewards, and moments of “this works and is so freaking awesome!”. It’s a wild ride.

What I want to focus on, is the person who may be thinking about web development or programming as a career. While web development is a certain part of the overall developer world, most of what I will cover will apply to any development/programming job.

So let’s talk about the journey. When starting out, you will learn the intro stuff, you may learn it fast, but then you will get into some nitty gritty. Let’s talk CSS as an example. It’s a deceptively tricky language. It seems so simple, you just declare properties on things and it’s displayed. But then you want to position things in complicated ways, and you may hack it with a float, or position absolute. But then you have to account for another element and before you know it, it’s tricky.

Then let’s say you move into a programming language like Ruby, PHP, Python, C, Java, Javascript, any of them. You learn arrays, variables, control structures, loops, and you get a handle on it and you think you can do anything. Then you learn that things are insecure, and then you have to escape everything, then you have to prevent users from being able to pass code, maybe you hear about Object Oriented coding and look into it. It seems simple, but then it also seems complex.

Then, in the web world, you have to combine the code your server-side application creates with the front-end code, and make the two work together. Things just got extra complicated. And if you want to be a full stack developer, you have to understand the server and all the settings that come with it.

Not to mention using Git, or whatever version control system, and before long you start to see that this is really hard. There is a ton to know and learn, and it will take you years to gain competence; then if you want to master things, you still have a lot to do.

But right now, you are at the beginning of this journey. Think about why you want to do all this work. What are you going to get out of it? It’s a great career and I enjoy it, but your time is valuable.

This is where this applies to life in a bigger context.

You are trying to get started and let’s say you are young. You don’t have a ton of money, but you are interested in pursuing this crazy journey. What would I recommend?

Think of everything as an investment.

Everything.

Time is extremely valuable.

Get a computer, a good one. Don’t go crazy and spend $5k or anything. For web dev, you don’t need crazy amounts of processing. So most computers out there can probably handle what you need. But a good one in terms of OS, reliability, and user-friendliness. Do research on these things and compare. The stores list processor speed, RAM, and hard drive space, but that’s not really what’s important. How usable is the computer? Remember, you will be using this computer a lot. Like 8-12 hours a day, probably more than five days a week.

I say 8-12 hours and probably more than five days a week, because most devs I know work on things on their own time. It’s not all work, we just enjoy building things.

Because you will be spending so much time on it, this is where the investment idea comes in. If a computer/OS is easier to use, or things work more reliably, that saves you time. And since we are talking $1,000 - $2,000 over the course of probably three years, the difference is small. If something is slow, or you spend hours figuring out how to get something to work on your computer...that’s a hidden cost.

Good news, other than software, that’s pretty much the only tool you need.

Software & Code editors

Test out many code editors. Again, you will be spending tons of time in your editor. Like tons. Get to know it, learn the shortcuts, optimize your workflow. Also, buy it. Don’t pirate it.

This is something I feel strongly about. Other developers took time to build the tool you are using, their time is worth money. Most editors are under $100. So if you use it for a couple years, the monthly cost is practically nothing. And I would feel weird expecting other people to pay me for my code or to use it, yet stealing other people’s code to make my code. Why should anyone pay for your code?

Some suggestions: Sublime Text, PHPStorm, or Atom (which is 100% free, so use a free one if you do not want to pay for one).

Alright, I am getting a little off my intended point behind this article. Let’s get back on track.

How do you learn this stuff?

There are great resources out there. Think of things like an investment. I am a big fan of Laracasts. It’s a $9/month subscription service that gives me access to ~800 tutorials, with more coming out every week.

The most important thing about that last sentence: it’s $9!!

Also, if you sign up for a year, you get a discount, if you follow the Twitter account, you can get a coupon. This can get your subscription cost to under $60. That’s $5 per month. Seriously, that’s the investment of a lifetime. Information and training for $5 a month. If you hesitate on that deal, you really need to rethink your investments.

But this applies to other training or even books. Obviously, don’t go buy every book or training available. Do some research, Google the author. You will find people you feel have a good knowledge base, ask them which books they have read. Then just buy them. And read them.

Let’s say the average book is ~$40. But if that book teaches you a technique or understanding that improves your code and saves you hours in the span of a month, that’s a good investment.

A video game might be $40, so you could invest money in you or the video game. Movie tickets for a date probably $20, you could pick up a book or a training package.

Do not only spend money on training and books...go have fun, people. The point is to be aware that everything should be considered an investment. Sometimes you invest in your short-lived happiness, like a movie. But saying books are expensive while buying a Starbucks every day...you could easily skip the coffee once or twice a week and afford the book.

The same applies to the personal projects or learning projects you work on. These things take time, so build something that has some potential of returning on your investment of time. Building something just because it’s cool? Sometimes, sure, go ahead. But there isn’t enough time to build all the things you want to build, so you really need to evaluate what you invest your time in.

Investing isn’t only money...be aware that your time is just as much an investment.

Everything is an investment, think about your daily life and think about what you are investing your time and money in. Don’t get crazy, sometimes you just need to blow some money just because. Take trips. Whatever you enjoy doing.

Just remember that some things are an investment in yourself, and you should be investing in yourself often.

]]>2016-07-01T18:28:20+00:00Andy Hugginshttp://andrewhuggins.com/post/a-tip-for-cover-lettersSo here's the problem as I see it:

You decide, for whatever reason, that you want to find a new job. Unhappy with current job, looking for a better opportunity, more money, bored, gained new skills, whatever. You do what most people do, you look for job postings that resemble the position you are looking for, get your resume in order, maybe write a cover letter, and ultimately apply.

Simple, right?

Two weeks later you get a call, schedule the interview, go to the interview, one week later you get the job offer.

Simple, just like that right?

Never. Well, maybe for some.

I have often felt that I could be doing more, that I have more to offer, that I could do well in a position with higher responsibility. But I have yet to really apply for one, since I feel that I don't have the background to support it. However, if people only got positions they had a background for, how would higher positions ever be granted?

That question is what I usually dwell on. How do you get the opportunity to get the experience? Chicken and the egg right?

I thought about the last job I applied to, I had my resume, other things, maybe a cover letter. But what about these things stands out? The resume is pretty standard. Sure you could have a slicker design, or maybe you have some awards that you list to tell the new company that other people think you're special. But the resume is just not where you are going to stand out. It's not really intended to. It's a quick breakdown of your history/experience. It's "what you have done."

The cover letter though, this is where I am starting to see the opportunity to really promote yourself. I am starting to think of it as the "what you can do" or even more specifically "what you will provide to the company."

I'll admit, previously, I kind of felt like the cover letter was a waste. But with this new approach of promoting yourself and telling the company what you will provide, it has a much more succinct purpose to me.

So let's go over some Do's and Don'ts

Don'ts

Don't say the same old boring shit.

"I have a meticulous attention to detail." - boring

"Previously I have worked on X, Y, Z with Acme company." - oooh wow

Don't regurgitate history details from your resume.

Don't bash previous boss/employers/coworkers.

Don't just change the name of the place you are applying.

Do's

Talk about what you can do.

"I saved X amount of time/money by implementing A, B, C" - results

Talk about personal growth.

"I researched X, presented to my boss, and was able to reduce bug reports by 10%" - more results, initiative

"I attend local meetups to gain exposure on new topics, in doing so, I was able to suggest solutions for new problems."

Talk about desire for more responsibility

"I am really excited about the potential of X, and I feel it can provide a great value to the company by increased productivity and cost savings." -

Do look at company information and try to suggest things you do not see them doing. Be sure to do it in a polite way.

If you feel like you could really make an improvement, pull out the big guns.

"I know the position listed is for X, looking over your website, I see an opportunity that I would love to discuss with you that has returned great results with my current employer."

The idea is to express a greater interest than simply what they may be looking for. Create an interest in talking to you for an interview. Telling someone what you can provide is a much better approach than telling someone what you have done or where you have worked.

Installation

It's pretty straightforward, as with most Composer packages...simply run the following command in the shell:

composer require ahuggins/utilities

Composer should bring in the latest version. And if I hear a good idea, I will gladly add it to the package.

Then you need to add the following to the config/app.php file in the providers array:

AHuggins\Utilities\Providers\UtilityServiceProvider::class,

Once that is added, you should be able to run `php artisan` and see the `utils` section in the Command list.

As of the date of this post, there are two commands: CreateUser and UserPassword.

CreateUser

To run the CreateUser command, simply run this in the shell: `php artisan utils:create-user` and answer the questions.

UserPassword

To run the UserPassword command, use the following: `php artisan utils:pw` if you know the id of the user you want to edit then do this: `php artisan utils:pw 42`

If you do not provide an id, then a table of users will be shown. This is intended for dev use, which normally will have only a few users in the db. If you have tons of users, it might be cumbersome to edit a user (or additional work may be needed for this package). Once the user is determined, it will ask you for a password. Type it in and hit enter.

When typing the password, the input is hidden. This is to help prevent anyone looking over your shoulder from seeing it. Just know that there is no visual feedback on the password field.

If you want to see this in action, here is a screencast of the package being used:

]]>2016-06-24T07:54:37+00:00Andy Hugginshttp://andrewhuggins.com/post/creating-a-composer-package-from-existing-projectI intend to make more screencasts and blog posts about things that I feel are not well covered. Creating a Composer package tends to be one of those things that people use, but often do not create. While this screencast is not all-encompassing, the following video is me going through the process for a simple package that came to mind the other day (see future blog posts :D).

Here's the video, but the content after the video lays out the basic process.

Create a Composer Package

Step 1 - Composer Init.

When creating a Composer Package, you need a place to store the contents of the package. Generally you can add this to an empty folder wherever you normally store your projects. Create a new folder with a relevant name.

Inside this folder (assuming you are in a terminal and already have Composer installed), run `composer init`.

The `composer init` command will walk you through the bare minimum of things you need to create your package. It asks you for a vendor/package name, a description, minimum-stability, dependencies as well as some other basic information.

Step 2 - Create Meta files

In addition to `composer.json`, most packages come with a `readme.md`, a `LICENSE`, and a `.gitignore`. Fill in the appropriate license content as well as information in the Readme. Things like installation notes, version information, and basic usage are all good things for the Readme.

Step 3 - Create Folders

Generally the code is stored in a `src` folder with additional folder structure within it. Frequently another folder is a `tests` folder.

So at this point your project should look something like this:
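Something like this, that is (a sketch; the package folder name is illustrative):

```text
my-package/
├── .gitignore
├── LICENSE
├── composer.json
├── readme.md
├── src/
└── tests/
```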

Step 4 - Add the Code

Add the code in the appropriate folders.

Step 5 - Make sure your Namespaces are correct

In your project you probably added a namespace, but it's probably going to be a little different for your package. You need to make sure they are correct to avoid any "Class not found" errors.

Step 6 - Add Autoload information in the Composer.json file

This is really important. This tells Composer where to look in order to autoload your package's files. Without this, your package isn't going to work.
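A minimal sketch of that autoload section, using the namespace from the utilities package mentioned earlier as an example (adjust the namespace and path to your own package):

```json
{
    "autoload": {
        "psr-4": {
            "AHuggins\\Utilities\\": "src/"
        }
    }
}
```

With a PSR-4 mapping like this, a class such as `AHuggins\Utilities\Providers\UtilityServiceProvider` is expected to live at `src/Providers/UtilityServiceProvider.php`.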

Step 7 - Update the Readme with appropriate installation information.

I mentioned this before, but it really is important to tell potential users how to install and use your package. The easier it is for them, the more likely they are to use it.

Add to Github

Once you have the steps above complete, you need to add the contents to Github. This tutorial isn't really about Git and Github, so the quick and dirty steps to add to Github:

`git init` in the project root

Create a new repository on Github (it will provide directions on how to add your new repo to Github).

`git add .` to add everything to the staging area for git (this is in the project root again)

`git commit -m 'good message'` creates a commit

`git remote add origin path/to/git/should/be/copied/from/github` tells git where the remote location is located

`git push -u origin master` pushes the local git to the remote

That should be pretty standard stuff for working with Git/Github, I might do a series on Git if I have time.

Now the next steps on Github involve publishing our package to Packagist. View the repository page on Github and click on "settings" then click on "Webhooks & Services." Then click on "Add Service" and type in "Packagist." You will need to add your Packagist username (NOT EMAIL) and the token from your Packagist account.

If you do not have a Packagist account, go create one now and copy your token.

Once this is created, we need to Submit the repo to Packagist.

Add to Packagist

Go to packagist.org, and after you login, click "Submit" in the top navbar. You will want to copy the git address from the Github repository page and paste it in the `Repository URL` field.

This should connect your repository to Packagist.

Now install the package and follow the steps in your readme and verify they are accurate and work. Make any adjustments if needed.