
If you’re looking for a powerful system that lets you easily change in-world surface images/textures, make live modifications, and add customized web links within your Unity-based Jibe multiuser virtual world, then you might be interested in a new product from ReactionGrid.

Watch the above video for a complete walkthrough. Here are some screenshots from it.

Everything looks better with Wiener Dogs.

This beautiful 3d store model is made up of many high quality 2d surface images/textures, and I could modify any of them with ACES.

Use the in-world menu system to change surface textures and weblinks.

I just changed the junk food aisle into one containing healthy fruit using ACES!

What is ACES?

ACES is a system that adds user-modifiable display boards to a Jibe world. Any image on the web can be projected onto any display board, and boards can be configured to open any URL in a new browser window.

The content in an ACES display board can be changed in-situ (within the live published Jibe world) with no work required in the Unity editor.

All ACES display boards can be changed in-world by any user logged in with an account that has administrative-level access.

ACES display boards can also be configured so that any user can “claim” an unused board as their own and then have full admin control over it.

ACES display boards don’t have to look like boards! Imagine changing the image textures of storefronts, buildings, any surface at all in your Jibe world.
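If you’re curious what a board like this looks like under the hood, here’s a minimal sketch of the general idea in Unity C#. To be clear, this is just my own illustration using the stock Unity API (the class and field names are made up), not the actual ACES code:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Minimal sketch of an ACES-style display board (an illustration only,
// not ReactionGrid's actual code). Attach to any mesh with a Collider.
public class DisplayBoard : MonoBehaviour
{
    public string imageUrl = "https://example.com/poster.png"; // image to project
    public string linkUrl = "https://example.com";             // opened on click

    void Start()
    {
        StartCoroutine(LoadImage(imageUrl));
    }

    // Swap the board's texture at runtime; no Unity editor work required.
    public IEnumerator LoadImage(string url)
    {
        using (UnityWebRequest req = UnityWebRequestTexture.GetTexture(url))
        {
            yield return req.SendWebRequest();
            if (req.result == UnityWebRequest.Result.Success)
            {
                GetComponent<Renderer>().material.mainTexture =
                    DownloadHandlerTexture.GetContent(req);
            }
        }
    }

    // Clicking the board opens its web link in a new browser window.
    void OnMouseDown()
    {
        Application.OpenURL(linkUrl);
    }
}
```

ACES builds the in-world menus, admin checks and board “claiming” on top of this kind of basic mechanism.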


In this hands-on workshop, I’ll be demonstrating exactly how to export your own user-created objects (both prim- and mesh-based) and move them between Second Life, Opensim and Unity. Attendees will watch my desktop via a live TeamViewer screenshare and follow along on their own using freely available software.

The crux of my workshop will be a live demonstration in which I create something in both Opensim and Second Life and then walk through exactly how to get it into a scene in Unity. I’ll also demo how to move content between Second Life and Opensim. If you’re worried all this might be overly complicated, I promise it will be a lot easier than you expect. Plus you’ll have the fun and “excitement” of watching me do all this live on my own desktop (what could possibly go wrong?). The key takeaway is that the whole process is easy enough for anyone to learn, regardless of technical expertise.
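For the Unity leg of the trip, the gist is simple: copy the Collada (.dae) file you exported from Singularity into your project’s Assets folder, and Unity imports it like any other mesh. Dragging and dropping works fine; for the scripted equivalent, here’s a tiny editor sketch (the menu name and file path are placeholders I made up):

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only sketch: place a Collada (.dae) file exported from Singularity
// into the open scene. The path below is a placeholder; the file must
// already have been copied somewhere under Assets/.
public static class DaeImportExample
{
    [MenuItem("Tools/Place Imported DAE In Scene")]
    static void PlaceImportedDae()
    {
        const string path = "Assets/Imports/MyExportedBuild.dae";

        // Make sure Unity has imported the copied file as a mesh asset.
        AssetDatabase.ImportAsset(path);

        // Load the resulting model and drop an instance into the scene.
        var model = AssetDatabase.LoadAssetAtPath<GameObject>(path);
        if (model != null)
            Object.Instantiate(model);
        else
            Debug.LogWarning("No importable model found at " + path);
    }
}
```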

If you can’t watch it live, no worries. My session will be recorded so you’ll be able to watch it later. I’ll update this blog post with a link to the recording once it’s online.

There are many educational games out there that do their best to teach people about the environment. And many of them do a great job.

For example, I really like how Earth Day Canada put together their EcoKids website. The games on EcoKids are mostly simple simulations with engaging action and puzzle-based mechanics, and it’s great how they blend the computer-based games with physical-world activities (e.g., play a game on the computer, then go outside and do some recycling). These are games that encourage people to make positive changes in their physical world, improving the environment for everyone.

But there’s another level of immersive environmental education we haven’t even touched yet.

First, a key fact about Nature that we often forget.

Nature likes to hide things. Particularly when something is wrong.

It’s a fundamental trait that has developed in pretty much every species on the planet. Are you sick? Weak? Injured? Well, you better hide it as much as possible, otherwise something will come along, notice you’re indisposed, and then eat you for lunch. This trait also manifests itself in entire networks of interdependent and related organisms (i.e., ecosystems). By the time it’s easy to observe a systemic problem, the damage is often irreversible.

So, it’s not enough for us to be well educated and observant. We need superhuman powers to help us visualize what’s really happening in Nature.

I believe artificial life combined with augmented reality is the magic key. We can help Nature tell us her secrets by creating artificial life forms directly connected to the repositories we’ve already built for collecting and tracking environmental data. Imagine the appearance and behaviors of these artificial life forms changing based on those data, generating powerful human-observable moments. And finally, imagine these artificial life forms living in an augmented reality space overlaid on the natural world.

In Hayao Miyazaki’s film Princess Mononoke, Kodama are small mystical creatures living in the forest that represent the spirits of all the trees. Their behavior and appearance in the movie are directly related to the health of the trees they inhabit. For example, when the trees get sick, the Kodama can be seen falling from the air and dissolving into the ground.

Now, imagine walking up to a tree in the physical world.

Is that tree really healthy? Not sure, since trees (like most life forms) are pretty good at hiding things (until it’s too late). Is the forest in which this tree lives getting enough water? Is the water table polluted?

Sure, you could pull environmental data up on your smartphone and look at graphs and charts and summarized reports.

But those are all cold data, with no sense of life to them.

Rather, imagine watching the data express itself through a family of Kodama that live around the tree. Imagine looking through your smartphone into an augmented reality space full of artificial life with which you can interact and communicate.

Oh no, all the Kodama are brown and withered! That means drought! Oh, they’re all walking over to that other tree. There must be water over there. Wait, they’re mutating into something weird. Some kind of pollution? The imaginative possibilities, not to mention the entertaining and engaging gaming scenarios, are endless.
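To show how thin the technical layer between “cold data” and a living creature really is, here’s a toy Unity C# sketch of the mapping. Everything in it is hypothetical (the field names, the thresholds, the idea of a soil moisture feed); the point is simply that a couple of numbers can drive both appearance and behavior:

```csharp
using UnityEngine;

// Toy sketch of data-driven artificial life (all names and thresholds hypothetical).
// A real version would pull these values from live environmental data feeds.
public class KodamaAgent : MonoBehaviour
{
    [Range(0f, 1f)] public float soilMoisture = 1f;   // 0 = severe drought, 1 = saturated
    [Range(0f, 1f)] public float pollutionIndex = 0f; // 0 = clean, 1 = heavily polluted

    static readonly Color Healthy  = Color.white;
    static readonly Color Withered = new Color(0.4f, 0.3f, 0.2f); // dry brown

    Renderer bodyRenderer;

    void Start()
    {
        bodyRenderer = GetComponent<Renderer>();
    }

    void Update()
    {
        // Appearance expresses the data: drought fades the creature toward brown.
        bodyRenderer.material.color = Color.Lerp(Withered, Healthy, soilMoisture);

        // Behavior expresses the data too: severe drought makes it sink into the ground.
        if (soilMoisture < 0.2f)
            transform.Translate(Vector3.down * 0.1f * Time.deltaTime);

        // Pollution "mutates" the body; here, just a simple visual distortion.
        float warp = 1f + 0.5f * pollutionIndex;
        transform.localScale = new Vector3(warp, 1f / warp, warp);
    }
}
```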

The first thing you’ll notice about Wiglets is that…well, let’s be honest, they look kind of strange.

That’s because Wiglets are not hand-drawn cookie-cutter characters.

They’re artificial life forms that look and behave a certain way because of their genetics.

They’re neither homogenized nor pasteurized.

And unlike video games where you have limited character creator options (Choose from these 5 hairstyles! How about these 7 different noses?), a Wiglet’s DNA can recombine through breeding to create a highly unpredictable range of physical and behavioral diversity.

Kind of like real life.

Which is exactly the point.
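To make the genetics idea concrete, here’s a toy sketch (my own illustration, not Wiggle Planet’s actual genetics) of how a numeric genome can recombine and mutate at breeding time:

```csharp
using System;

// Toy illustration of breeding-driven diversity. Each gene is a numeric trait
// (body size, hue, shyness, and so on); offspring mix their parents' genes
// and occasionally mutate.
public class Genome
{
    public float[] Genes;
    static readonly Random Rng = new Random();

    public Genome(float[] genes)
    {
        Genes = genes;
    }

    // Uniform crossover plus small random mutations: unpredictable offspring.
    public static Genome Breed(Genome a, Genome b, double mutationRate = 0.05)
    {
        var childGenes = new float[a.Genes.Length];
        for (int i = 0; i < childGenes.Length; i++)
        {
            // Each gene comes from either parent with equal probability.
            childGenes[i] = Rng.NextDouble() < 0.5 ? a.Genes[i] : b.Genes[i];

            // Occasionally nudge the gene to introduce brand-new variation.
            if (Rng.NextDouble() < mutationRate)
                childGenes[i] += (float)((Rng.NextDouble() - 0.5) * 0.2);
        }
        return new Genome(childGenes);
    }
}
```

Breed the same two parents a dozen times and you get a dozen different children; let those children breed and the population drifts somewhere nobody planned.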

In the near future you’ll be hearing more from us about a Wiglet breeder app that will let you experiment with just how diverse they can be.

We think you’ll fall in love with their quirky appearance and behaviors.

The 1st International Conference on e-Learning, e-Education and Online Training is being held September 18-20 in Bethesda, Maryland. This conference will assess a wide range of progressive ideas for the future of e-Learning, focusing on the idea of technology as a means to education rather than an end in itself. The conference organizers have lined up a wonderful range of interdisciplinary speakers and are planning to attract a heterogeneous group of scholars and practitioners.

“Augmented Mind: The Evolution of Learning Tools
from Language to Immersive Reality”

Innovative educators are constantly facing the challenge of matching pedagogical goals with complementary technological tools. Unfortunately, given the wide range of technologies and devices that vie for consumer attention, the right choices are not always clear and are typically obscured by media hype. In this presentation, John Lester will describe how focusing on the way the human mind interacts with the world and other human beings can help identify the right tools for the right jobs. From a mind-augmentation perspective combining constructivist and behaviorist approaches, John will explore web-based tools ideal for knowledge management, self-animated autonomous agents in augmented reality, and finally the unique (and sometimes over-hyped) affordances of perceptually immersive multiuser 3D virtual worlds for collaborative learning.

My goal will be to tell an interesting story with examples and demos of technologies that I think really leverage how our minds naturally embrace the world around us. One such technology that I’m currently exploring, and that you’ve probably never heard of, is Wiglets.

Wiglets are autonomous, evolving, self-animated and self-motivated agents that can exist in both completely virtual and augmented reality environments. They exist at a wildly creative intersection of artificial life, art and gaming. And perhaps best of all, you can interact with them directly through touch and gestures.

Another topic of discussion will be the affordances of multiuser 3d virtual worlds, especially how one can reduce the barrier to entry for people interested in leveraging them for educational purposes. ReactionGrid has recently developed some new tools that integrate with the Unity3d-based Jibe platform to provide on-the-fly content editing in a simple yet powerful way. I’ll be giving a sneak preview during my presentation.

Want to easily change this web-based 3d environment on the fly without having to muck around in Unity? Now you can. I’ve got some new tricks with Jibe to show you.

Between 2010 and 2012 I had a lot of fun organizing and running the Hypergrid Adventurers Club (HGAC) meetings. Those were the very early days of Hypergrid connectivity in Opensim, and things often went awry during our explorations. We all stuck together and helped each other out through crazy technical challenges and exciting adventures, but it was the community of helpful people at these meetings that impressed and amazed me the most.

Time moved on, and over the past couple of years I stopped organizing Hypergrid Adventurers Club meetings. Not because of any lack of interest in Opensim and the Hypergrid on my part, mind you. I’m still very excited about the future of Opensim and the Hypergrid, and I continue to explore and experiment a lot on my own. It was just that I felt the HGAC had run its course. Opensim was becoming much more stable and easier to use, hypergrid jumps were becoming very reliable, and directories of great places to explore were expanding (see Hyperica and iDreamsNet). Also, attendance was gradually declining, and other aspects of my life were getting busier, so I figured it was time to wind things down.

But when one flower closes, a new and different one usually blooms. That’s the beauty of online communities. They adapt and change and grow.

The Hypergrid Safari is a new group that runs weekly tours across Opensim. In their own words:

“Want to discover open sim and learn to hypergrid? Join our friendly weekly trips to destinations all over the hyperverse, and get help with shopping for your avatar, free land opportunities, and sympathy when you run up against snags and bugs.”

I’ve been attending these trips and they’re simply fantastic. The organizers are Thirza Ember, Fuschia Nightfire, Wizard Gynoid and Wizardoz Chrome. I think all of them bring beautiful new perspectives to exploring the Hypergrid, in particular the perspectives of skilled content creators and innovative artists who have a long history of pioneering work in Second Life and other virtual worlds. And everyone attending brings their own thoughtfulness and great sense of humor to the group. Once again, it’s the community of people that impresses and amazes me the most in online worlds.

So please check out the Hypergrid Safari and go on one of their tours. You can get connected with them and learn more in many different ways:

On Monday, April 7 at 6pm PDT I’ll be giving a Virtual Worlds Lecture in Second Life.

The title of my talk is “Finding the Balance between Pedagogy and Technology.” Here’s a summary:

One must always seek a thoughtful match between pedagogy and technology. Different virtual world platforms are suited for different uses, ranging from collaborative work environments to immersive goal-oriented simulations. The speaker will discuss current virtual world technological trends involving specific gaming technologies like Unity3D and the growth of Open Source platforms such as OpenSimulator. The discussion will focus on helping educators choose the right tool for the right job, matching pedagogical goals with technological affordances.

Part of what I’ll be doing, in addition to showing slides and speaking, will be a live demo of some of the content import/export tools in the Singularity Viewer. You’ll get to see how you can easily back up content you’ve created in Second Life or Opensim to your hard drive and how to get that content into other 3D platforms like Unity3D and Blender.

I’ll be attending these two upcoming conferences. If you’re planning to attend either of them or if you just happen to be in town when they occur, please contact me via my about.me page if you’d like to meet up and chat about learning in virtual worlds!

The main aims of this conference are to increase our understanding of experiential learning in virtual worlds, both formal and informal, to share experiences and best practices, and to debate future possibilities for learning in virtual worlds. For full details, please see the conference website.

My panel presentation will be “Finding the Balance between Pedagogy and Technology.” Here’s my abstract:

Next Generation virtual worlds will be tightly coupled to many other emerging technologies, leveraging modern knowledge management processes and providing platforms for broad use among teachers and learners. As the technological landscape grows, it is becoming increasingly difficult for educators to identify the right platform (or mix of platforms) for their specific immersive learning needs.

In my current position at ReactionGrid and my previous work at Linden Lab and Harvard Medical School, I have explored the use of a wide range of gaming and virtual world platforms to augment education. Today there are a number of very interesting virtual world technological trends involving specific gaming technologies like Unity as well as the growth of Open Source platforms such as OpenSimulator. My ongoing work involves finding the right match between educational goals and technological affordances as well as identifying key synergies when virtual world technologies are interwoven with existing social media and web-based educational content.

Above all else, there must be a thoughtful match between pedagogy and technology. Different virtual world platforms are suited for different uses, ranging from collaborative work environments to immersive goal-oriented simulations. One of the most important and challenging goals for any educator exploring virtual worlds is simply finding the right tool for the right job. Likewise, it is critical for virtual world platform developers to keep a firm focus on well-established knowledge management principles when designing new technologies intended to advance the field of immersive learning.

I’m particularly thrilled about this panel because I’ll be participating with Dr. Bryan Carter from the University of Arizona. Bryan is a true pioneer in using virtual worlds for experiential learning, and he’s been working with virtual environments since his dissertation project in 1997 when he created a virtual simulation of Harlem, NY as it existed during the 1920s Jazz Age and Harlem Renaissance. Virtual Harlem was one of the earliest full virtual reality environments created for use in the humanities and certainly one of the first for use in an African American literature course. The project continues to grow and evolve as Bryan explores new virtual world platforms.

This new conference will assess a wide range of progressive ideas for the future of e-Learning, focusing on the idea of technology as a means to education rather than an end in itself. The conference organizers are lining up a wonderful range of interdisciplinary speakers and are planning to attract a heterogeneous group of scholars and practitioners. For full details, please see the conference website.

About Me

John Lester is an expert in Multiuser 3D Virtual Worlds, Immersive Learning, Knowledge Management and Community Development. His background is in neuroscience research and medical education, and he previously worked at Harvard Medical School, Massachusetts General Hospital and Linden Lab.
John is currently Chief Learning Officer at ReactionGrid Inc., helping clients develop new systems for immersive learning using web- and mobile-based virtual world platforms. He's also Community Developer and Creative Advisor at Wiggle Planet, helping create free-range, self-animated beings at the intersection of virtual and real.
For more contact info please see http://about.me/pathfinder