
Recently we ran into an issue where we were required to get into an E3-16F with a console cable. This was due to a downgrade from R3.1.4 to R2.3.3.2. Such a downgrade causes the updated configuration file to not be read correctly and raises a running-suspect alarm on the unit. The unit boots with no problem, but it is totally inaccessible from any interface. The solution was to use a console cable. Calix does not provide console cables, nor do they provide a pinout for the correct one to use.

The requirements they state are:

115200 Baud

8 Data bits

1 Stop bit

No flow control

Further, an RS-232 DB9 (female) to RJ11 or RJ12 connector is possible, but no pinout diagram exists. Luckily the DB9 only needs three wires: TX, RX, and ground. And an RJ11 only has four usable pins, so it just becomes a process of elimination. I have attached a diagram that shows how to wire it up, so if you need to purchase one or make one yourself, you will know how or what to get. Since there is no standard for this kind of connector setup, it seems crazy for Calix not to put the pinout on their site.
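For reference, once the cable is wired, a session matching Calix's stated settings can be opened from a Linux box with `screen` (the device name `/dev/ttyUSB0` is an assumption for a typical USB-to-DB9 adapter; yours will vary):

```shell
# Apply Calix's stated settings: 115200 baud, 8 data bits, 1 stop bit,
# no parity, no hardware flow control. /dev/ttyUSB0 is an assumed device name.
stty -F /dev/ttyUSB0 115200 cs8 -cstopb -parenb -crtscts

# Open the console session at the same baud rate.
screen /dev/ttyUSB0 115200
```

Any serial terminal (minicom, PuTTY, etc.) works equally well as long as it is set to 115200 8N1 with flow control off.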

It’s been over a year since my last post, and a bit has happened within that time frame. I wrote many blog posts in the past year, but they were never fully fleshed out or worthy of being publicly accessible.

1) I graduated with a BS in Business Information Systems. My last two years at university were some of the best. I finally found a degree I enjoyed and was good at. I made some good friends, and the experience helped me find myself in a way.

2) I moved to Omaha, Nebraska for a short period of time. This is a tough one. I spent about six months living on my own and learning nothing but life lessons. I worked at the Union Pacific Railroad as a Project Engineer for the UI section of a PTC web application. The whole experience was eye-opening in a lot of ways. I won’t say it was bad or unhealthy, but it really showed me what I desire in a job and where my values really stand. Sounds simple, but it took me half a year and moving 1,600 miles from home to figure out. I did have some moments though; I must say I really love trains, but not the railroad. SandyNet accepted me back as one of their own upon my return to Oregon.

3) I got a cat. He is adorable, and also very annoying.

4) I am now the Assistant Director at SandyNet. I finish with this, since I have picked back up on FMS development and have already incorporated new technologies into the existing builds. UP gave me hands-on experience in designing modern web applications, even though they are 10 years behind everyone else. Suddenly all of these things I learned in college are true, and I need to pay closer attention to how I write code so that it can better weather the inevitable invasion of entropy.

With all that said, I have very briefly closed the gap on what I have been up to. Here’s a rundown of things I have continued to learn and grow upon:

Last year Calix released Activate under their Compass suite. Activate was branded as the next iteration of SDN for their AXOS-enabled products. At first I was told that Activate would eventually take over the role that CMS currently fills, but as it stands right now, it is only used for managing E3 and E5 G.fast devices. Which is why I had to start using it.

Late last summer we got a shipment of about 20 E3-16F units. These units are designed to provide gigabit speed over twisted copper pairs within a limited distance.

I totally jacked this image from this site. Anyway, these units are supposed to be the best solution for MDU (multiple dwelling unit) deployments, as they can utilize the existing copper infrastructure (since all buildings have phone wiring) and reduce build-out costs for ISPs. The idea is sound, but there are a lot of variables that will determine how fast of a connection can be provided. Obviously distance, quality of line, shielding, vectoring, frequencies, etc. are all factors in determining throughput. It brings all of the complications of DSL into smaller environments! It has just as many potential variables that must be considered when implementing. 😛

A big project at work has been getting these things ready for deployment. They have been sitting long enough and are beginning to gather dust. Not to mention, depreciate. So while I have been focusing on finishing my degree at college, the team has been getting fiber connectivity to the comm rooms of some apartment complexes. I have been attempting to get the FMS ready for Activate integration, and some of its key parts upgraded. A basic module-level list is below:

Plant Management

So I started on Plant Management. I created an object called an AS, or Access System. Access Systems are any kind of device that resides in the field or needs to be placed on the map. For example, an E3-16F needs to be connected to the fiber PON system, but isn’t a terminal, because it does not simply pass through connections.

Let me sidetrack and vent about definitions and terms in the telecom industry. Some terms are loosely used and often overgeneralized, while others are quite rigid and must be used properly. Terminal is a term used somewhat loosely. I have seen it used in conjunction with pedestals, or other enclosures for any medium. I have had to set a more rigid definition for when it is used in the FMS. A terminal in the FMS has always been the bridge between infrastructure and customer premise. Which means if a drop cable connects to the infrastructure in a utility easement, it is connected to a terminal. Or if it is on the side of a building and has multiple customer drops going into it, it is a terminal.

The weird thing about the E3-16Fs is that they both do and do not conform to our definition of a terminal. Sure, they connect customer drops to their premises, but they are also fed with one line and do not have a 1:1 connection ratio. It is 1:16, and on top of that it uses two different mediums, so how beneficial would it be to look at a map and see a T instead of something like AS? At least you could know it was some form of infrastructure and not a ped in a comm room (huh?). Plus they are not passive and require power. Defining them as something other than a terminal allows us to hold a lot more attributes.

Let’s dive into the details of how I made this stupid thing work. Access Systems were created to work in conjunction with other devices. Currently only Activate is used for the access systems, but it was designed and built so that we could use coaxial or other systems in the future. This required me to create yet another module called Devices, which will eventually hold all of our network nodes.

This list syncs with Activate on an hourly basis, checking for name and IP address changes on units. This is important to ensure that records are not orphaned. So once an hour is a good time interval.
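As a rough sketch of what that hourly pass does (the record shapes here are illustrative, not Activate's actual API response format), the sync boils down to diffing the remote device list against local rows keyed by a stable device ID:

```python
def diff_devices(local, remote):
    """Compare local FMS rows against devices pulled from Activate.

    Both arguments map a stable device ID to {"name": ..., "ip": ...}.
    Returns the updates to apply locally, keyed by device ID, so that
    records follow renames and re-IPs instead of being orphaned.
    """
    updates = {}
    for dev_id, rec in remote.items():
        cur = local.get(dev_id)
        if cur is None:
            updates[dev_id] = {"action": "create", **rec}
        elif cur["name"] != rec["name"] or cur["ip"] != rec["ip"]:
            updates[dev_id] = {"action": "update", **rec}
    return updates

local = {"e3-01": {"name": "MDU-North", "ip": "10.0.0.5"}}
remote = {"e3-01": {"name": "MDU-North-1", "ip": "10.0.0.5"},
          "e3-02": {"name": "MDU-South", "ip": "10.0.0.6"}}
changes = diff_devices(local, remote)
# e3-01 was renamed, so it gets an update; e3-02 is new, so it gets created.
```

Keying on a stable ID rather than on name or IP is what makes the rename/re-IP case safe.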

If someone clicks through to a device's details page, they are presented with device diagnostics.

The details page is designed to provide an overview of the unit so that any problems can be quickly seen. Reboot and Force Synchronization buttons exist, and port details can be expanded to show which services are provisioned on the port. Alarms are shown to draw attention to problems.

Let’s dive into the structure of Activate. This took me a little while to reverse engineer and understand, but hopefully the following graphic will help others understand how all the parts of Activate work together. Bear in mind that this may not be the exact way things are stored, but it is the best I could do from only looking at the web interface and API.

Basically, two independent pieces are created: the Node record and the Subscriber record. In the FMS, it is essential that the customer be linked to the node record, regardless of system, so I ended up spending a bit of time trying to find the structure so I could query any part of Activate and eventually get back to the subscriber.

Subscribers are linked to Service Configuration records, which define what type of service to deliver to a port. Data, Voice, and Video are all options. Service Configuration records are linked to a service template, which can be built in the Activate interface under Services. Refer to the Calix Activate documentation on how to configure these.

Ports are linked to a Port Configuration record, which defines what port-level settings are assigned to the port. In addition, Service Configuration records also associate with a port, so this is where everything really links together. The image below shows where to find the template settings for ports and services.
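To make those relationships concrete, here is how I ended up picturing the chain (illustrative record shapes, not Activate's actual schema): the service configuration is the join point, referencing both a subscriber and a port, so you can walk from any port back to its subscriber.

```python
# Minimal in-memory picture of the Activate object graph as I understand it.
subscribers = {"sub-1": {"name": "Jane Doe", "customId": "FMS-1001"}}
ports = {"port-7": {"node": "node-A", "port_config": "pc-residential"}}
service_configs = [
    # A service config ties a subscriber, a service template, and a port together.
    {"subscriber": "sub-1", "template": "data-1gig", "port": "port-7"},
]

def subscriber_for_port(port_id):
    """Walk from a port back to its subscriber via the service configs."""
    for sc in service_configs:
        if sc["port"] == port_id:
            return subscribers[sc["subscriber"]]
    return None

owner = subscriber_for_port("port-7")
```

The `customId` field is the hook back into the FMS, which is what the link chain later in this post relies on.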

Pretty much anything else regarding the Activate structure can be found in the Calix documentation. Let’s move on to drops and how this structure can integrate the two systems.

Drops

Earlier we defined drops as a bridge between customer premise and infrastructure. This is true. In the case of G.fast, the drop is the copper line from the Access System to the CPE: it starts at the demarcation box and ends when it is plugged into a network terminal. Modification of this module took a lot of time, and I’ll spare you the details, because a lot of it was revamping old code (a lot from the first version of the FMS and some from the previous system before the FMS). Types were defined so copper or fiber could be specified, and depending on the type, the logical cable spans from the address to a terminal or access system.

Within the FMS, we now have all the links set up. Customers are linked to drops via address. Drops are linked to access systems by plant management record. And customers link to Activate through the subscriber customId. Now we have all the pieces to be able to connect to Activate and work between the two systems!

Customer

Up until this point, customer records only contained device links to equipment (the inventory module). With G.fast, devices are no longer assigned to customers; instead, ports are configured with services. So a new table was created to assign both ports and customers to units.

Next, to add a port, a completed drop must exist that is connected to an access system. Looking at the image from the drop section, the access system and port do in fact match and can be added to the customer. Pressing the add button brings up a provisioning wizard. There is an error message because I already had service provisioned for the unit.

Once service is provisioned, the network terminal can be plugged in and it should start working. In addition, 801Fs and 844Es can be assigned to customers and provisioned remotely.

That pretty much wraps up this post. It has taken some time, and the official deployment of the G.fast units has not occurred just yet, but it should be coming soon, and we’ll see whether what I have made holds up. I am currently making the system more robust and implementing features here and there. Some issues with Activate connectivity to the units have been causing frustration, so we’ll see how this thing plays out in the long run.

Finally I want to drop a link to the Compass Activate PHP Class for anyone who wishes to tie into the thing. The repo is here. It is still under development, so little to no documentation exists yet (5/22/2017). As always if you have questions please feel free to email me at gbrewster@agoasite.com

I haven’t made a post since late October, and I felt one was needed. It won’t be long, I don’t have time right now, but I can give a quick update on what has occurred. I am currently in the winter term of my senior year, and I am hoping to be done by June 2017. I am taking Systems Analysis and Design, Networking and Telecom, Project Management, and Marketing. :-/

Given that, my gears are shifting. I am doing more development than ever, but I am also now the lead person for the FMS (and the only person, I guess). What I mean by that is I am now building modules and functionality for some companies and organizations that are interested in licensing what I have made. I have had to swap hats multiple times within a given day and focus on the business side for half the day and coding for the other half. I am working more than ever, and I only see myself getting more and more buried under the exponentially increasing workload. Back in late September, I finally made the switch to Git and revision control. My only regret is not switching sooner. I am finally able to separate my dev and production environments. In addition, a lot of changes have been happening on the plant management side. I have purposely left that module out of the blog because it is such a huge module that is constantly changing.

Work orders, locates, and tax lots all tie into and rely on the plant management module. And the integration is endless. I don’t consider myself a GIS guy, and it really shows here. When I was first racking my brain, trying to come up with a solution for generating a web-based fiber GIS suite, I had to gather some basic understanding of GIS. One issue that quickly came up was getting data to the client effectively. I started to look into GIS-based DBMSs, which led me to PostGIS on Postgres. Well, I wrote the FMS in MySQL. So that began the design of keeping two separate data silos that have to stay in sync. It is not a perfect system by any means, but it is manageable. Basically, the FMS syncs with a GIS server running an instance of an open source GIS platform. The FMS allows for a bidirectional sync between the two systems. Clients pull records from the FMS and the GIS system, and data is displayed and manipulated.
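The core idea behind keeping the two silos in step is conceptually simple, even if the real implementation isn't. A hedged sketch of a last-write-wins bidirectional sync (the actual FMS/GIS sync is considerably more involved than this):

```python
def sync(side_a, side_b):
    """Bidirectional last-write-wins sync between two record stores.

    Each store maps a record ID to {"data": ..., "modified": <int timestamp>}.
    The newer copy of each record wins; missing records are copied over.
    """
    for store, other in ((side_a, side_b), (side_b, side_a)):
        for rec_id, rec in store.items():
            if rec_id not in other or other[rec_id]["modified"] < rec["modified"]:
                other[rec_id] = dict(rec)

fms = {"seg-1": {"data": "fiber segment", "modified": 100}}
gis = {"seg-1": {"data": "fiber segment (edited)", "modified": 200},
       "seg-2": {"data": "new conduit", "modified": 150}}
sync(fms, gis)
# After syncing, both sides hold seg-2 and the newer (edited) copy of seg-1.
```

Timestamp-based conflict resolution like this is lossy on true simultaneous edits, which is one reason the real system needs to be more careful than the sketch.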

This basically laid the groundwork for what is now a functioning, critical part of the FMS. It seems that the possibilities for finding relationships in data become endless when it is all central. 🙂

Note: I have not reinvented the wheel by any means. This concept has existed in many other plant management based programs. The schematic above was a concept rather than a finished product. The real structure is much different and way more complicated.

As you can see, I am tying in the last post here. Plant management ties into the calendar and work orders, which tie into the field tech console. Tracking and recording all operational, customer, and OSP activity has generated extensive sets of data, much of which may never find a use.

On another note, we have not yet gotten our E3s deployed to MDUs. SandyNet has just started its expansion to the business district, which is being constructed by NorthSky Communications. This will include expansion of the network to all businesses within Sandy’s business district.

The delay in getting MDUs deployed has been something of a blessing. It has allowed me to get caught up on other projects, rather than hurrying to get Calix Activate incorporated. However, progress has been made. The FMS CPE provisioning has changed yet again. Any Compass-connected device can now allow for one-click provisioning. This is the solution for the 844-Es. I have successfully integrated CC+ subscriber creation, device creation/assignment, and modification through one interface.

I’ll make a whole post dedicated to CC+ integration in the future (heh).

So I wrote the above information towards the end of January, and since then I have been finishing up an overdue project on the plant management module. It’s now done, but I would rather just push this post out and write a new one dedicated to it.

Earlier this year, I made a post regarding a new module called the Field Tech Console. If you haven’t read it, I’ll give a quick breakdown of what it does. I’ve updated a few functions, so this is the latest rundown of its abilities:

Tracks the location and time of an installer while they process a calendar event

Allows provisioning of CXNK-related ONTs through the Calix Management System and Compass

Allows management to manage by exception and view installer performance without interference

Processes different types of jobs

Fiber installs

Customer appointments

Custom events

Accesses the work orders module

Let’s break these down a bit more. The tracking is a function that has existed since the beginning of the field tech console. Depending on the requirements of the calendar event, the technician is required to update their status in order to complete a job. Different jobs have different statuses. Fiber installs consist of four required updates before a job can be completed: Enroute, Arrival, Installing, and Clean Up. Customer appointments have only three.

Enroute, Arrival, and Clean Up. When a tech leaves for a job, they update the status to Enroute. Upon arrival, the status is updated again. If the tech is installing fiber, once they start installing the line or ONT, they update the status to Installing. Once they are done, they update the status to Cleaning Up. Each status update records the time and approximate location of the job. This builds a timeline of how long it took to complete each operation. Management can view this information live through the FMS, and can further piece together the jobs in a given day to see how much time the tech was idle or being utilized.

For example, a contractor has three fiber installs to perform in a given day. Their time card reflects seven hours of work, meaning each job takes 2.33 hours. If the installer is updating their statuses correctly, there should be a nice distribution between finish times and enroute updates to the next job. If each job only takes one hour to complete, some timestamps would reflect abnormal behavior. It could be time and distance between jobs, time between status updates, etc. It is not foolproof, but it is hard to fake consistently over time if you have historical data to compare to. One such flaw is faking the amount of time it takes between installing and cleaning up. If an installer finishes the job and sits in their truck for 45 minutes before updating the job to Cleaning Up or Finished, how does management ensure that the data is accurate? These kinds of issues are fairly difficult to track and eliminate. Using other data, this kind of issue can almost be removed from the equation. In Calix-based systems, the access platform will send out syslog messages when ONTs arrive on or depart from the network. If that time is compared to when the status was updated, a manager can know that if a job took 45 minutes after the ONT came online, abnormal behavior has occurred. It’s these kinds of data metrics that allow managers to easily track the performance of contractors, and take corrective action when needed.
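The cross-check described above can be sketched in a few lines (the status names and the 30-minute threshold here are illustrative, not how the FMS actually scores jobs):

```python
from datetime import datetime, timedelta

def flag_idle_gap(ont_online, cleanup_update, threshold=timedelta(minutes=30)):
    """Flag a job when the 'Cleaning Up' status update lags far behind the
    moment the ONT came online (taken from the access platform's syslog)."""
    gap = cleanup_update - ont_online
    return gap > threshold, gap

ont_online = datetime(2016, 10, 20, 10, 0)   # ONT arrival per syslog
cleanup = datetime(2016, 10, 20, 10, 45)     # tech's status update
suspicious, gap = flag_idle_gap(ont_online, cleanup)
# A 45-minute lag between ONT check-in and 'Cleaning Up' trips the flag.
```

Comparing two independent data sources is the whole trick: the tech controls the status timestamps, but not the syslog from the access platform.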

Now, that explanation took a bit longer than expected, but hopefully you get the idea of how this kind of data is useful. Let’s discuss the latest improvement in the field tech console: integration of Calix’s Compass suite with ONT provisioning through the FMS. With the addition of Activate to the Calix Compass suite, I had to start integrating our FMS software more with their cloud-based system, rather than just CMS. The transition to using both systems has proven to be a little bit clunky for now, and adds anywhere between five and 20 seconds of provisioning time when creating records within the FMS. But, with our size and the number of jobs being processed, 20 seconds equates to very little in terms of money. I’ll have to make another post talking about the new CC+ integration, but for now, I’ll just link to the public repo.

When a tech has completed provisioning the ONT, they set the status of the job to Cleaning Up. This update removes the ONT provision wizard from the screen, and prompts the tech to provision the WiFi. The tech has the ability to modify an existing provisioning record, or create a new one (only one record can be stored).

These changes occur instantly if the ONT is online and operational. If the ONT is still provisioning, once it performs its check-in with Consumer Connect, it will download the provisioning record. The idea is to allow an installer to provision and configure an ONT for a customer without pulling out a laptop or connecting any wires. In addition, management can gather valuable statistics on installs. The following flowchart depicts the actual sequence and parties involved in one job or cycle.

This idea will go live in two days, on 10/24/2016. A goal of this implementation is that it will generate one fewer call to support to get WiFi configured. For an organization that prides itself on adding more value for the customer, a one-stop, quick, and accurate experience has been shown to increase customer retention. This project has hit two major milestones. It has gotten me to update the field tech console, fixing many of the bugs that existed previously, and it has finally gotten me to write the CC+ API class. The PHP class is publicly accessible on my GitHub page, and the link to the repo is above.

In the future, once development dies down a little, I am hoping to focus on increasing the effectiveness of the processes and sequences I have created. Utilizing what I am learning in operations management classes will hopefully allow me to benchmark, optimize, and replace existing flows in an attempt to reduce overhead and cut out waste.

I figured it was time to update this blog a bit, give a quick update on changes I have been making within the FMS. I haven’t died, or quit, I have just been quiet. Since my last post, a few things have happened.

Calix released the E3 G.fast units. This is cool, but has been difficult to get around to. The FMS was built to work with CMS and its NBI. The G.fast units can hang under an E7 (or at least will in a later release), but do not use CMS in any way, shape, or form. Calix has created Activate, a cloud-based SDN solution. I could rant about this all day, but I am not too impressed with the whole idea. I don’t like being at the mercy of the cloud for my operations. In addition, Activate is not even ready to be used. We have been waiting for over a month for Calix to give us access to the program. Finally, our management VLAN is severed from the internet, making it impossible to implement without a VPN. I guess we’ll just keep making exceptions until our security is nonexistent.

In addition to having to rewrite most of my code to work around Consumer Connect, I have to rewrite the following modules:

Customer management

Inventory

ONT Provisioning

Field Tech Console

Alerting

I’ll have to keep y’all updated as I finish these. I have been working on each one, but none are currently completed.

Major overhaul of the Plant Management module. This will deserve its own post, and it will be a LONG one. Our plant management is almost out of its infant stage, and will soon have a good enough core, or baseline, that a lot of cool things could be attached to it. Currently the plant management module handles:

Objects

Sites

Regions

Terminals

Splitters

Splice Cases

Segments

Fiber Cables

Distributive and traditional fiber network layouts

Fiber cable and splicing management

Generation of splice diagrams

Optical cable and route tracing

I assure you, there is a lot more that it could be doing, and it could be doing its current operations much better. Nonetheless, I will continue to work on it.

The Locates and Work Orders modules. These two modules are smaller, but nonetheless important. Locate record management and tracking makes it easy to plot out where and when areas need to be located. I have been in discussions with our Public Works department to assist them in locating and time management. Work orders are another small, cross-module feature that allows our utility technicians to determine what needs to be finished. It is accessible through the field tech console, and contractors can be given limited access into our system. Paper work orders are no longer needed; these are auto-generated and virtually filed. They can currently be assigned to segments and drops. Techs can simply pull up their work order list, go out and fix the issue, update it in the field, and be monitored by the appropriate individuals. Finally, inspectors have the ability to analyze the record and approve or reject the work order completion. Notes and various attributes record what actions were taken and what resources were used.

So, overall I have been busy. I am hoping to start wrapping up some of these modules in the near future. It currently seems like a never ending job. At least for one person :/

On the bright side, I’ll be back in Corvallis soon enough for my final year at OSU.

It has been a few weeks since I wrote a post regarding the FMS. I have actually been busy writing a new module for outside plant management. I have also been chugging away at my classes, anticipating being out for the summer.

Finally, I have been thinking a lot about the future of the FMS. According to classic economic theory, I would not be writing this system if my wages outweighed the value of the system. It would be irrational. Maybe it is… Anyway, that is beyond my pay grade. I spend a vast amount of time thinking about and attempting to write my code in a modular, planned-out fashion. I have to satisfy our needs as a company, but also make it nimble enough to translate to another company that operates differently.

Let me slow down, back up, and explain.

I want to sell this thing to other companies. I will explain why later.

Let’s resume with the thought fresh in our minds that SandyNet is not an ordinary ISP. For one, we are municipally owned. Two, we are not telecom guys by any standard. We studied for our CCNAs. We ran a WISP for seven or so years. We don’t know copper, telephone, or TV. For this reason, the FMS is built differently than a standard solution would be. Our scenario is different. We are a municipality with sysadmins and network engineers, running a fiber network with equipment made for telecom companies. We see things differently and we operate differently.

Our company mission and strategy are way different than the competition’s. Our focus is on the customer, not so much the service. This is why the FMS revolves around the customer so much. I had the pleasure of chatting with Erwin Utilities a few weeks ago, and they mentioned that the way we structured our FMS is different. Part of this may have been by accident. The FMS’s predecessor was the SandyNet Intranet, which was a customer management system. I took that system and grew it. The core is still the customer, and I am fine with that (if you want to get really technical, the core is based on the tax lot now).

Friday, May 27th, I spoke with our regional sales engineer from Calix. I showed him our system, and he seemed impressed with what we were capable of doing (keep in mind, our FMS does a lot more than what is shown on this blog). We discussed a lot of integration possibilities and possible uses of the system. It was said that our FMS may work well in both municipalities and the private sector, for small or maybe medium-size deployments. What was most important is how niche a market it is tailored for.

Our FMS was built around a greenfield deployment. This meant we got to write the rules of our operation, not stay in compliance with some old standards from the stone ages. I believe that in addition to our so-called nonstandard operation and management techniques, our custom (currently proprietary) software has given us a competitive advantage against the local cable and phone companies.

We do not dictate the market. We react to the needs of consumers, and give them the service they want and deserve. Phone and cable companies should not have the power to influence or constrain the operations, and sometimes the profitability, of a company. That’s not a utility. Internet is a necessity in some businesses and households now. If access, speed, and reliability of the internet are required for our daily lives, then bad ISPs limit the innovation of companies and unrestricted access for individuals. I feel that new ISPs and greenfield deployments have the ability to change the old way that ISPs operate. If that is the case, then software can help achieve that goal. I want that to be the purpose of the FMS. How can you add more value for the customer, without charging them more? How can you increase the reliability and usability of your service, without leveraging additional staff? We are seeing this now, as we don’t have to hire more workers to handle the increased workload from operations. Our software takes care of the day-to-day bookkeeping and busy work.

When you put the situation in the context I have explained, who wouldn’t want a unified system to manage their operations? What company would want to hire additional staff, or be inefficient? No company would want their data scattered instead of in a central location. SandyNet noticed that when data is easily accessible and automatic, trends and predictions can be found. First-time resolution on calls increased dramatically. Record-keeping errors and operator mistakes diminished. Finally, correlations between data were noticed. This data acts as a feedback loop for the previously described outcomes. In addition, it opens up new opportunities to add more value for the customer. All of this occurs at no extra cost to us. Well, besides my wages. 🙂 I hope the FMS has a future; this is the biggest project I have ever attempted, and it would be a shame if others could not reap the benefits it has to offer.

Also, I got a name drop in a utility blog a little while back. Happy day.

Well, as promised from my previous post, I am here to unveil the new module that will CHANGE THE WAY WE INSTALL FIBER CUSTOMERS (not really, but I like to think so).

So naturally, after a fiber project is completely built out and operational, the time and effort needed to build a system like this is finally found and put to good use. During our initial build-out, we hired contractors to assist in the deployment of ONTs for our new fiber customers. Being contractors, they would show up late for work, leave early, put in poor effort, and often just cause a lot of issues. This issue resolved itself once we discontinued our relationship with the contracting company, and took on some of their employees (the good ones). So given that we no longer have an issue with getting ONTs installed in people’s houses, this module has lost some of its luster. But hey, I am a college student getting paid to write code right now, so I gotta find something productive to do!

In reality, this module will actually solve some other issues that we are currently facing, but are able to manage.

Purpose

The field tech console is a web interface that allows technicians, installers, and contractors to manage and track their appointments. In the past, we used Google Calendar to push out appointments to contractors and installers. While this worked, see my calendar post for why I chose to leave Google and move my calendar system in-house.

So just looking at the screenshot, it is easy to see now what this page does. Appointments are stacked, and the appointment details are provided to techs.

So, nothing really fancy, Google calendar does the same thing, only looks fancier. Not true!

The ability for calendar events to tie into customer records is helpful, since it can allow for appointments to be assigned to customers or tax lots for later reference. For example, I can look at a customer record and see who has visited them in the past.

Not only that, but these events have statuses. Depending on how the repair/install/pizza run went, it is recorded if it was successful, incomplete, or the customer failed to show up.

Location tracking

Now this feature is probably the most ideal for managers, and least ideal for really bad contractors that lie on their time cards. There are stages built into an event that track its progress.

Updating statuses during an event utilizes the device and its GPS. Coordinates of the device are recorded under the record, along with a timestamp. As shown in the last post, the result is a map on the final report for the event.

So now we know when and where a status was updated, and how much time elapsed between each task. One issue we thought of was that the numbers would just be fudged. While true, if a manager were closely watching each event, it would be apparent when times did not match up or locations looked off. Apart from that, timestamps for when ONTs arrived and departed are recorded, which can be compared against the time the unit was provisioned.
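The recording side can be sketched as follows. In the browser the coordinates would come from `navigator.geolocation.getCurrentPosition()`; the record shape and field names here are illustrative assumptions, not our actual schema:

```javascript
// Build the record stored for a single status change on an event.
// coords mimics the GeolocationCoordinates object the browser hands back.
function buildStatusUpdate(eventId, status, coords, timestamp) {
  return {
    eventId,
    status,                 // e.g. 'arrived', 'working', 'departed'
    lat: coords.latitude,
    lng: coords.longitude,
    recordedAt: timestamp,  // compared across stages for elapsed time
  };
}

// Elapsed minutes between two recorded stages of the same event.
function minutesBetween(a, b) {
  return (new Date(b.recordedAt) - new Date(a.recordedAt)) / 60000;
}
```

With each stage stored this way, the final report only has to plot the lat/lng pairs and diff the timestamps.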

ONT Provisioning

One of the coolest features that the field tech console allows for is real-time provisioning of ONTs. Not to get too much into detail, but if you read my previous post on the Calix NBI, you know that we provision units using the FSAN on each ONT. That FSAN is assigned to the customer and used for numerous things. One big issue we just dealt with in the past was that ONTs are assigned directly to customers, instead of using a registration ID. This meant ONTs were provisioned in the morning and then handed to the installers, which sometimes resulted in the wrong ONT getting assigned to the wrong customer. We no longer really have this issue, but we can use this new feature to our advantage.

Usually on the day of an installation, a customer record is created and a configuration for the ONT is made. This prevents techs from providing the wrong service package to the customer, or messing up other options.

ONTs are assigned from inventory to installers. Those installers can then take batches of ONTs to their vehicles, without being required to check into the office every morning. Since the FMS tracks ONTs, it presents each installer with a list of their assigned ONTs.

Then, once an appointment has been set to the proper status, the ONT can be provisioned!
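That step can be sketched like this. To be clear, the FSAN pattern below (CXNK plus eight hex digits) and the request shape are assumptions for illustration; the real call goes out through the Calix NBI described in my earlier post:

```javascript
// Illustrative FSAN check: Calix residential ONT serials commonly look
// like 'CXNK' followed by 8 hex digits. Treat this pattern as an
// assumption, not a spec.
const FSAN_PATTERN = /^CXNK[0-9A-F]{8}$/;

// Build the provisioning request the console would hand to the NBI layer.
// customerId and servicePackage come from the appointment's customer record,
// which is what stops the wrong package landing on the wrong ONT.
function buildProvisionRequest(fsan, customerId, servicePackage) {
  if (!FSAN_PATTERN.test(fsan)) {
    throw new Error('FSAN does not look valid: ' + fsan);
  }
  return {
    fsan,
    customerId,
    servicePackage,
    requestedAt: new Date().toISOString(),
  };
}
```

Rejecting a malformed FSAN at this stage gives the installer an immediate error in the field instead of a silent provisioning failure later.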

Obviously this is where the happy green success message would show up, if the ONT weren't already provisioned elsewhere. But you can see how an installer has more autonomy to do their job now, and we can actually collect more information than before! This module goes live this Friday, and we will begin to see firsthand how well it works. Of course there is more to this module, but I have shown you a good portion of it. There is room in the future for expansion and new features.

For a while now, I have been debating moving the calendar integration into our FMS. Currently we tie into the Google Calendar API v3 via its JavaScript client. While this works, we miss some of the features that Google refuses to add. I began the search for an in-house solution to our problems.

I had a couple of requirements and features that were ideal in a calendar solution. It had to have a non-sucky API, be fast, deliver in multiple formats (XML, JSON, HTML), and be open source. Currently my employer uses Google Apps, which I have utilized heavily during the initial development of the FMS. While it is unlikely to die any time soon, Google has been known to kill off products quickly, leaving users frantically looking for a new solution (*cough* Reader). Now, given my current age, I never had to experience Lotus Notes, Exchange, or other forms of email. Once I got into the game, Gmail was making waves, and I hopped on the train and never looked back. This left me severely limited in my knowledge of calendar systems and of what other possibilities existed out there. I began to look for a completely overhauled solution.

Attempt 1

My first attempt was a Zimbra-based system, which has a dead-simple API. Simply add a username and password with proper permissions to an account within an HTTP header, and it returns HTML, JSON, XML, or iCal formats. Pushing or importing an event is done via the iCal format. I began to entertain this option and built a front-end to create and view events. Once I had a quick and dirty example, I presented it to my boss. It didn't pass, and I had to begin brainstorming other ideas.
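To give a feel for how simple it is: Zimbra exposes mailbox folders over a REST interface, where a GET on `/home/<account>/calendar` with an `fmt` parameter returns that folder in the requested format. A sketch (host and account are placeholders):

```javascript
// Build the Zimbra REST URL for an account's calendar folder.
// fmt can be 'json', 'xml', 'ics' (iCal), or 'html'.
function zimbraCalendarUrl(host, account, fmt) {
  return 'https://' + host + '/home/' + encodeURIComponent(account) +
         '/calendar?fmt=' + fmt;
}

// Fetching it with basic auth (Node 18+ ships a global fetch):
// fetch(zimbraCalendarUrl('mail.example.com', 'greg@example.com', 'json'), {
//   headers: {
//     Authorization: 'Basic ' + Buffer.from('user:pass').toString('base64'),
//   },
// }).then((r) => r.json());
```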

Attempt 2

After scratching my head and putting the project on the back burner for a couple of weeks, I finally had to sit down with my boss and discuss options. It was obvious that there was no perfect solution. By giving up Google Calendar, we would lose its nice interface and its tie-ins to accounts and mobile devices. On the other hand, doing bi-directional updates between our systems and Google Calendar was not ideal. With the same requirements as before, I proposed a PHP/MySQL-driven solution. This would free our installers/contractors from needing Google accounts, and allow for many custom options, such as tax lot association and customer account links. This option took a bit of convincing, but it finally got approved, and I began development.

I will attempt not to bore you with too many technical flows, but I utilized Serhioromano's Bootstrap Calendar to fit within our FMS. After a few minor tweaks and bug fixes, I soon had a decent enough system working. Here is an example of the script used to pull calendar events from the database and format them to work with the JS calendar.
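The production script is PHP; a minimal sketch of the same idea in JavaScript looks like this. Bootstrap Calendar polls a feed and expects an object of the form `{ success: 1, result: [...] }`, with each event carrying `id`, `title`, `url`, `class`, and millisecond `start`/`end` timestamps. The row fields and detail-page URL here are illustrative stand-ins for our actual columns:

```javascript
// Convert database rows into the JSON feed Bootstrap Calendar expects.
function formatEvents(rows) {
  return {
    success: 1,
    result: rows.map((row) => ({
      id: row.id,
      title: row.summary,
      url: '/fms/event.php?id=' + row.id,  // hypothetical detail page
      // 'class' picks the event color; 'event-success', 'event-info',
      // etc. are the classes the calendar's stylesheet ships with.
      class: row.tag === 'fiber-install' ? 'event-success' : 'event-info',
      start: Date.parse(row.start),        // ms since epoch
      end: Date.parse(row.end),
    })),
  };
}
```

The server just runs the query, passes the rows through a function like this, and echoes the result as JSON.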

There were also three main types of events, which used a common tag to represent what kind of event it was.

fiber-install – Obviously a fiber install. The reason this one is different will be explained later

install – Different from a fiber install. Could be a WiFi or wireless one

Later this week, we will be going live with this calendar. Some other features include editing an event up until it has been closed by a technician, and recording additional data that can be used for metrics.

This is the default view. The events stack better than when I had it tied into Google Calendar, and it is much, much faster at loading.

This is the main edit event page. As you can see, the event is closed, and only reports can be generated now.

Here is a completed event that I captured while on campus. Don’t mind the duplicate entry for “Cleaning up”. I fixed that bug :).

So that is all fine and dandy, but why stop there? By looking at the report, you've got to be asking yourself: wait, how are there geolocated timestamps?!? Ah, let me explain. The reason we moved to this new calendar system was also in part due to a new module, called the Field Tech Console. This console allows technicians to receive jobs and update their event as time goes on. That brings numerous features, which I will explain in the next post. For now, assume it is GOING TO CHANGE HOW WE INSTALL FIBER. Not really, but I like to think so…

So there you have it: my new calendar module. By moving our calendar in house, we can tie our planned operations into records and pretty much everything else. It may not be as fancy as Google Calendar, or even as portable, but the trade-off is some prettiness for a much more efficient record-managing system.

Before I conclude this post, I want to show off the add event page. There are actually three different types of event add pages: one for pending requests (a module for people requesting service), one for current customers (repairs and stuff), and one for general events (a main line conduit fix, or picking up pizza).

On the 8th, Calix flew three people out to Sandy to interview employees and customers about our fiber network. I finished my organizational behavior presentation by 10am and arrived in Sandy at 12. I was interviewed for 30 minutes about how I have utilized the northbound interfaces in different Calix software. No more than two weeks later, Calix released the video online, so I have embedded it here. Granted, I only appear for 30 seconds and ramble on about our Fiber Management System, but I am hoping that with other recent events I will be able to move forward a bit more, and link my name to OSS fiber management solutions.