In microeconomics and management, "going vertical" or vertical integration occurs when a company owns its own supply chain. For example, a car manufacturer might also produce its own steel, tires and batteries.

This is in contrast with horizontal integration, wherein a company produces several items which are related to one another.

Higher education has been a vertical enterprise for centuries. We keep knowledge creation, teaching, testing, and credentialing all under one company/college banner.

These are terms from economics and business. Are they applicable to discussions about education?

Horizontal integration often occurs in the business world by internal expansion, acquisition or merger. Of course, that might happen in education too, but there are also signs that it is happening in other ways.

In more general terms, assessment alignment is often the reason for both horizontal and vertical alignment in education. Alignment is typically understood as the agreement between a set of content standards and an assessment used to measure those standards. By establishing content standards, stakeholders in an education system determine what students are expected to know and be able to do at each grade level.

It is probably best when education goes both vertical and horizontal.

Horizontal information exchange can include teachers sharing methodology, students sharing information, and students helping each other learn.

When a curriculum is truly vertically aligned or vertically coherent, what students learn in one lesson, course, or grade level prepares them for the next lesson, course, or grade level. I know teaching is supposed to be structured and logically sequenced so that learning progressively prepares students for more challenging, higher-level work. I saw that structured sequencing more in my K-12 teaching than I do in higher education, which is more siloed.

The first chatterbot I encountered was ELIZA, running on the Tandy/Radio Shack computers in the first computer lab of the junior high school where I taught in the 1970s.

ELIZA is an early natural language processing program that came into being in the mid-1960s at the MIT Artificial Intelligence Laboratory. The original was by Joseph Weizenbaum, but there are many variations on it.

This was very early artificial intelligence. ELIZA is still out there, but I have seen a little spike in interest because she was featured in an episode of the TV show Young Sheldon. The episode, "A Computer, a Plastic Pony, and a Case of Beer," may still be available at www.cbs.com. Sheldon and his family become quite enamored by ELIZA, though the precocious Sheldon quickly realizes it is a very limited program.

ELIZA was created to demonstrate how superficial human-to-computer communication was at that time, but that didn't mean that when it was put on personal computers, humans didn't find it engaging. Sure, kids had fun trying to trick it or cursing at it, but after a while you gave up when it started repeating responses.

In all the various forms I have seen it, the program still uses pattern matching and substitution. She (as people often personified ELIZA) gives canned responses based on a keyword you input. If you say "Hello," she has a ready response. If you say "friend," she has several ways to respond depending on what other words you used. Early users felt they were talking to "someone" who understood their input.

ELIZA was one of the first chatterbots (later clipped to chatbot) and an early candidate for the Turing Test. That test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human is not one ELIZA can pass by today's standards. ELIZA fails very quickly if you ask her a few complex questions.

The program is limited by the scripts in its code. The more responses you give her in a script, the more variety there will be in her answers. ELIZA was originally written in MAD-SLIP, but modern versions are often in JavaScript or other languages. Many variations on the original scripts were made as amateur coders played around with the fairly simple code.
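That keyword-and-script mechanism is simple enough to sketch in a few lines. This is not Weizenbaum's original MAD-SLIP code, just an illustrative modern sketch of the same pattern-matching-and-substitution idea, with a made-up script of my own:

```python
import random
import re

# An ELIZA-style script (my own, illustrative): each entry pairs a
# pattern with canned replies; captured words get substituted back
# into the reply, which is what made early users feel "understood."
SCRIPT = [
    (r"hello|hi\b", ["Hello. How are you feeling today?"]),
    (r"my (\w+)",   ["Tell me more about your {0}.",
                     "Why do you mention your {0}?"]),
    (r"i am (.+)",  ["Why do you say you are {0}?",
                     "How long have you been {0}?"]),
    (r"friend",     ["Do your friends worry you?",
                     "Tell me about your friends."]),
]
# Fallback responses -- the repetition that eventually gives the game away.
DEFAULT = ["Please go on.", "I see.", "Can you elaborate on that?"]

def respond(text: str) -> str:
    """Return a canned reply for the first matching pattern."""
    text = text.lower()
    for pattern, replies in SCRIPT:
        match = re.search(pattern, text)
        if match:
            # Substitute any captured words into the chosen reply.
            return random.choice(replies).format(*match.groups())
    return random.choice(DEFAULT)
```

A line like "My dog ran away" matches `my (\w+)` and comes back as "Tell me more about your dog." There is no understanding here at all, only reflection, which is why a few complex questions break the illusion so quickly.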

One variation, called DOCTOR, was made to be a crude Rogerian psychotherapist who likes to "reflect" your questions by turning them back at the patient. This was the version my students found fascinating when I taught middle school, and my little programming club decided to hack the code and make their own versions.

Event-based Internet is going to be something you will hear more about this year. Though I had heard the term used, the first real application of it that I experienced was a game. But don't think this is all about fun and games. Look online and you will find examples of event-based Internet biosurveillance and event-based Internet robot teleoperation systems and other very sophisticated uses, especially connected to the Internet of Things (IoT).

What did more than a million people do this past Sunday night at 9pm ET? They tuned in on their phones to HQ Trivia, a live game show app.

The HQ app has had early issues in scaling to the big numbers with game delays, video lag and times when the game just had to be rebooted. But it already has at least one imitator called "The Q" which looks almost identical in design, and imitation is supposed to be a form of flattery.

This 12-question trivia quiz has money prizes. Usually, the prize is $2000, but sometimes it jumps to $10K or $20K. But since there are multiple survivors of the 12 questions who split the pot, the prizes are often less than $25 each.

Still, I see the show's potential. (Is it actually a "show"?) The business model? Sponsors, commercial breaks, and product placement in the questions, answers and banter between questions.

The bigger trend here is that this is a return to TV "appointment viewing." Advertisers like that and it only really occurs these days with sports, some news and award shows. (HQ pulled in its first audience of more than a million Sunday during the Golden Globe Awards, so...)

And is there some education connection in all this? Event-based Internet, like its TV equivalent, is engaging. Could it bring back "The Disconnected" learner?

Event-based distributed systems are being used in areas such as enterprise management, information dissemination, finance, environmental monitoring and geo-spatial systems.

Education has been "event-based" for hundreds of years. But learners have been time-shifting learning via distance education and especially via online learning for only a few decades. Event-based learning sounds a bit like hybrid or blended learning. But one difference is that learners are probably not going to tune in and be engaged with just a live lecture. Will it take a real event and maybe even gamification to get live learning?

In all my years teaching online, I have never been able to have all of a course's students attend a "live" session, whether because of time zone differences, work schedules or perhaps content that just wasn't compelling enough.

I learned about edge computing a few years ago. It is a method of getting the most from data in a computing system by performing the data processing at the "edge" of the network. The edge is near the source of the data, not at a distance. By doing this, you reduce the communications bandwidth needed between sensors and a central datacenter. The analytics and knowledge generation are right at or near the source of the data.
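The core idea in that paragraph, process near the sensor and send only the distilled result upstream, can be sketched simply. This is my own illustrative example, not any particular vendor's edge platform; all the names are made up:

```python
# Illustrative sketch of edge-style processing (names are my own):
# instead of streaming every raw sensor reading to a central
# datacenter, an edge node summarizes locally and transmits only
# the small result, saving bandwidth and round-trip latency.

def summarize_at_edge(readings, threshold):
    """Reduce raw readings to a compact summary plus any alert values."""
    alerts = [r for r in readings if r > threshold]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
        "alerts": alerts,  # only exceptional values go upstream
    }

# 1,000 raw temperature readings stay at the edge...
raw = [20.0] * 997 + [20.5, 95.0, 21.0]
# ...and one small dict is all that crosses the network.
payload = summarize_at_edge(raw, threshold=90.0)
```

The decision being made in the car, the elevator, or the oil pump is the same one this toy function makes: crunch locally, ship only what the central system actually needs.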

The cloud, laptops, smartphones, tablets and sensors may be new things but the idea of decentralizing data processing is not. Remember the days of the mainframe computer?

The mainframe is/was a centralized approach to computing. All computing resources are at one location. That approach made sense once upon a time when computing resources were very expensive - and big. The first mainframe in 1943 weighed five tons and was 51 feet long. Mainframes allowed for centralized administration and optimized data storage on disc.

Access to the mainframe came via "dumb" terminals or thin clients that had no processing power. These terminals couldn't do any data processing, so all the data went to, was stored in, and was crunched at the centralized mainframe.

Much has changed. Yes, a mainframe approach is still used by businesses like credit card companies and airlines to send and display data via fairly dumb terminals. And it is costly. And slower. And when the centralized system goes down, all the clients go down. You have probably been in some location that couldn't process your order or access your data because "our computers are down."

It turned out that you could even save money by setting up a decentralized, or "distributed," client-server network. Processing is distributed between servers that provide a service and clients that request it. The client-server model needed PCs that could process data and perform calculations on their own in order for applications to be decentralized.

[Photo caption: Google co-founder Sergey Brin shows U.S. Secretary of State John Kerry the computers inside one of Google's self-driving cars - a data center on wheels. June 23, 2016. State Department photo/Public Domain]

Add faster bandwidth and the cloud and a host of other technologies (wireless sensor networks, mobile data acquisition, mobile signature analysis, cooperative distributed peer-to-peer ad hoc networking and processing) and you can compute at the edge. Terms like local cloud/fog computing and grid/mesh computing, dew computing, mobile edge computing, cloudlets, distributed data storage and retrieval, autonomic self-healing networks, remote cloud services, augmented reality and more that I haven't encountered yet have all come into being.

Recently, I heard a podcast on "Smart Elevators & Self-Driving Cars Need More Computing Power" that got me thinking about the millions of objects (Internet of Things) connecting to the Internet now. Vehicles, elevators, hospital equipment, factory machines, appliances and a fast-growing list of things are making companies like Microsoft and GE put more computing resources at the edge of the network.

This is computer architecture for things, not just people. In 2017, there were about 8 billion devices connected to the net. It is expected that by 2020 that number will be 20 billion. Do you want the sensors in your car that are analyzing traffic and environmental data to be sending it to some centralized resource - or doing it in your car? Milliseconds matter in avoiding a crash. You need the processing to be done on the edge. Cars are "data centers on wheels."

Remember the early days of the space program? All the computing power was on Earth. You have no doubt heard the comparison that the iPhone in your pocket has hundreds or even thousands of times the computing power of those early spacecraft. That was dangerous, but it was the only option. Now, much of the computing power is at the edge - even if the vehicle is also at the edge of our solar system. And things that are not as far off as outer space - like a remote oil pump - also need to compute at the edge rather than needing to connect at a distance to processing power.

There have been so many posts these past few weeks about trends for education and technology, but one way of seeing what trends may emerge this year is by looking at the tracks, presentations and keynote speakers at EdTech conferences.

I'll be moderating a track next week at NJEdge.Net's Annual Conference: NJEdgeCon2018 "DIGITAL LEADERSHIP & ENTERPRISE TRANSFORMATION" January 11 & 12, 2018 in New Jersey. My track is, naturally, Education and Technology, which has presentations on best practices, innovations and the effectiveness associated with current LMS and online learning tools, effective infrastructure, resources, sustainability models and integrated assessment tools.

But if you look at the other tracks offered, you can see that INFORMATION technology outweighs instructional technology here. Other tracks at the conference are Big Data & Analytics, Networking & Data Security, Customer Support & Service Excellence, Aligning Business & Technology Strategies, and Transformation Products & Services.

Amber Mac (as in MacArthur) will talk about adaptation and the accelerating pace of corporate culture in the digital economy.

I have followed her career for a decade from her early tech TV and podcast venture to her current consulting business. She helps companies adapt to, anticipate, and capitalize on lightning-quick changes—from leadership to social media to the Internet of Things, from marketing to customer service to digital parenting and beyond. It’s not about innovation, she says; it’s about adaptation.

When it comes to teachers and technologies, the battle cry of Virginia Tech professor John Boyer is embrace, not replace. In his talk, he presents his view that the best teachers will embrace technologies that help them better communicate with students, but do not fear because those technologies will never replace human to human interaction. But blending the best communicators with the best technology has to offer will produce some amazing and unpredictable opportunities!

Wayne Brown, CEO and Founder of Center for Higher Ed CIO Studies (CHECS), will talk in his session on longitudinal higher education CIO research and the importance of technology leaders aligning technology innovations and initiatives with the needs of the higher education institution. His two-part survey methodology enables him to compare and contrast multiple perspectives about higher education technology leaders. The results provide essential information regarding the experiences and background an individual should possess to serve as a higher education CIO. In collaboration with NJEdge, Wayne will collect data from NJEdge higher education CIOs and will compare the national results with those of the NJ CIOs.

Timothy Renick (a man of many titles: Vice President for Enrollment Management and Student Success, Vice Provost, and Professor of Religious Studies at Georgia State University) is talking about "Using Data and Analytics to Eliminate Achievement Gaps." The student-centered and analytics-informed programs at GSU have raised graduation rates by 22% and closed all achievement gaps based on race, ethnicity, and income-level. It now awards more bachelor's degrees to African Americans than any other college or university in the nation. Through a discussion of innovations ranging from chatbots and predictive analytics to meta-majors and completion grants, the session covers lessons learned from Georgia State's transformation and outlines several practical and low-cost steps that campuses can take to improve outcomes for underserved students.

Greg Davies' topic is "The Power of Mobile Communications Strategies and Predictive Analytics for Student Success and Workforce Development." The technology that has been used to transform, to both good and bad ends, most other major industries can connect the valuable resources available on campus to the students who need them most with minimal human resources. Technology has been used to personalize the digital experience in such industries as banking, retail, information and media, and others by reaching consumers via mobile technology. Higher Education has, in some cases, been slow to adapt innovative and transformative technology. Yet, its power to transform the student engagement and success experience has been proven. With the help of thought leaders in industry and education, Greg discusses how the industry can help achieve the goal of ubiquity in the use of innovative student success technologies and predictive data analytics to enable unprecedented levels of student success and, as a consequence, workforce development.