In the last segment, we set up a database and reviewed all the code that interacts with it. This time, we'll set up authentication so we can lock down the system and the database, and add a few user-centric capabilities like editing and deleting comments.

In the last segment, we discussed data modeling and arrived at a possible data model to use with the database. This segment, we'll walk through how to set up the database and connect your frontend to it so you can start saving data about the map.

In the last segment, we implemented a simple map user interface we could use to find certain areas on the map and show markers for them. This time, we'll look at connecting a database to the backend, allowing us to click on the map to add comments to a latitude/longitude coordinate, then let others see all the data that's been added to the map. We won't implement login yet; we'll leave that open for now.
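As a rough sketch of the idea, the record we save for each map click might look something like the function below. The field names here are illustrative assumptions, not the post's actual schema; note that `author` stays empty until login is implemented.

```javascript
// Hypothetical shape of a map comment record (field names are assumptions).
// Each click on the map produces one of these, which we'd then save to the
// database keyed by the clicked coordinate.
function buildComment(lat, lng, text) {
  return {
    lat,                                  // latitude of the clicked point
    lng,                                  // longitude of the clicked point
    text,                                 // the comment body
    author: null,                         // null until we implement login
    createdAt: new Date().toISOString(),  // when the comment was created
  };
}
```

A map click handler would call `buildComment(event.lat, event.lng, userText)` and hand the result to whatever save function the backend exposes.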

In the last segment, we set up an initial, empty Firebase application and got the environment working. Now we'll add a new frontend to the application to show a simple map view and let us search for our current location and other useful things. These instructions will walk through updating the frontend to add a free map control and have it do some searching for us.
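For the "search for our current location" piece, the browser's Geolocation API is the usual starting point. Here's a small, hedged sketch of a promise wrapper around it; the API takes callbacks, so wrapping it makes it easier to use with `async`/`await`. Passing the `geolocation` object in as a parameter (normally `navigator.geolocation`) also makes the helper testable outside a browser.

```javascript
// Wrap the callback-based Geolocation API in a promise. In the browser you
// would pass navigator.geolocation; here it's a parameter so it can be stubbed.
function getCurrentPosition(geolocation, options = {}) {
  return new Promise((resolve, reject) => {
    geolocation.getCurrentPosition(resolve, reject, options);
  });
}

// In the browser, centering the map might then look like (map API hypothetical):
//   const pos = await getCurrentPosition(navigator.geolocation);
//   map.setCenter({ lat: pos.coords.latitude, lng: pos.coords.longitude });
```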

This series of posts describes how you can build up a simple web application hosted on Google Cloud to allow users to store extra information about locations on a map. It's meant for people who are just getting into JavaScript and have at least basic knowledge of how it works. When there's a more advanced concept, I'll try to explain it a bit more fully. The goal is to have a simple application that you can build and host, and that you can think of as a "real" application.

When I first started using .NET Core, all the examples I found online were for web applications using ASP.NET Core. However, I wanted to start simpler and build a basic command line tool. This required a LOT of trial and error in manually setting up configuration, logging, and dependency injection, and tearing everything down appropriately so that all the logging output would properly get flushed. I kept that example around ... somewhere ... and had to refer my coworkers to it a few times over the years. Now, with .NET Core 3.1 and the new Generic Host, we can get a simple (but fully featured) command line application built up following Vertical Slice Architecture.

In the previous post, we refactored the backend of the template application into a "Celery" (or "Vertical Slice") architecture with feature folders. For the frontend single-page application (SPA), we'll refactor in a few simple ways: upgrade the dependencies, reorganize the file structure, and introduce React Hooks.
This post assumes you are familiar with the basics of React and can follow along. If that's not the case, you should run through some tutorial materials first. The ones on the React site are a great starting point to get you moving, and you can probably follow along by reading my code on GitHub as well.
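To give a flavor of the Hooks refactor, one common pattern is to pull a component's state transitions into a pure reducer function and wire it up with `useReducer`. The names below are illustrative, not from the repo; the reducer itself is plain JavaScript, which is part of the appeal — it can be understood and tested without rendering anything.

```javascript
// Illustrative reducer for a hypothetical comments panel. In a component it
// would be wired up as:
//   const [state, dispatch] = useReducer(commentsReducer, initialState);
const initialState = { comments: [], loading: false };

function commentsReducer(state, action) {
  switch (action.type) {
    case "load_start":
      // Flag that a fetch is in flight.
      return { ...state, loading: true };
    case "load_success":
      // Replace the list with the fetched comments.
      return { comments: action.comments, loading: false };
    case "add":
      // Append a single new comment immutably.
      return { ...state, comments: [...state.comments, action.comment] };
    default:
      // Unknown actions leave state untouched.
      return state;
  }
}
```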

In this episode, I take the template from last time and apply a "feature slice" strategy to the C# side of the pool, demonstrating a simple use of MediatR. I also dip my toe into the wonderful world of Nullable Reference Types in C#. If you're looking to apply a modern, scalable organizational strategy to your applications, this is a good place to start.

.NET Core 3.0 is out, and it's chock full of new features to get your application off the ground faster. I'm working on a greenfield application for managing a small church. As I work through it, I'll talk through the development process and the settings, take screenshots, and point out any salient code snippets. I'm using this as my vehicle for learning more of the new ins and outs of .NET Core 3.0 and React 16.8+, along with demonstrating ways to manage infrastructure as code and deploy everything in Azure. All the code will be kept open (forcing me to make sure I don't commit anything dumb like passwords or my credit card number). This post starts from the basic template and makes some initial improvements and upgrades to be more productive.

I've been super excited lately about building JAMstack sites using [GatsbyJS](https://gatsbyjs.org), and I recently set up a way for our church website to be updated automatically from Facebook, Office 365 calendar, and static content, hosted simply on Azure.

Down in Monterrey, Mexico, I did a talk for Headspring Talks entitled "Getting Started with Docker and Kubernetes". The goal was to give people who are starting to learn these incredibly powerful technologies a good kick-off point. When I was learning them, there were certain concepts that helped me internalize the way that Docker and Kubernetes work. I tend not to memorize facts or details very well. Instead, I focus on the core concepts of how something works, then derive the detailed knowledge from there. I make an educated guess at how something *should* work, and that helps me solve problems more effectively.

As is tradition, the first post on a blog should talk about how the blog was created and set up. Against tradition, however, I did not build my own blog site from scratch. Instead, I was looking into [GatsbyJS](https://gatsbyjs.org) and JAMstack-style generators, and I really liked the plugin approach. What tickles the software developer in me is the GraphQL-based query system baked into the core of the system. It allows you to add levels of abstraction to the content that you source through plugins, and query subsets of the data at compile time that you can then render into views at browse time.
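As an illustrative sketch of what that looks like, a Gatsby page query might resemble the fragment below. The field names assume markdown content sourced through plugins such as `gatsby-transformer-remark`, not this blog's actual schema:

```graphql
# Runs at build (compile) time; Gatsby injects the result into the page
# component as props.data, which the component renders at browse time.
query RecentPosts {
  allMarkdownRemark(limit: 5) {
    nodes {
      frontmatter {
        title
        date
      }
      excerpt
    }
  }
}
```

The point is the abstraction: whatever plugin sourced the content, the page only sees the GraphQL schema, so views stay decoupled from the data sources.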