VWBPE 2015: Ebbe Altberg – LL’s Next Generation platform

On Wednesday, March 18th, Ebbe Altberg gave the keynote presentation at the 8th annual Virtual Worlds Best Practice in Education (VWBPE) conference, which runs from March 18th through 21st inclusive, in both Second Life and OpenSimulator.

Ebbe Altberg: opening keynote at the 2015 Virtual Worlds Best Practice in Education conference, March 18th

His keynote address lasted a little over an hour, in which he outlined the Lab’s approach to education and non-profits, provided some insight into the Lab’s future plans, and discussed further information on the Next Generation platform. Following this, he entered into a Q&A session, which ran beyond the main session time, switching from voice to text chat in the process.

The following is a transcript of his core comments on the Lab’s next generation platform. These commence at the 31:20 mark in the official video of the event, although the platform is obviously mentioned earlier in his presentation, as Ebbe discusses education, issues of accessibility, etc. I’ve included audio excerpts here as well, for further ease of listening to his comments whilst reading. Time stamps for both the audio tracks and the video are supplied.

[00:00 / 31:20] So, the future platform for virtual experiences. We’ve said that the next generation platform, we still don’t have a name for this thing; we have a code name internally, but we don’t want to leak that out or use that, because that could just be confusing and distracting, and it’s probably going to change soon anyway. So we just refer to it as “the next generation platform”.

[00:21 / 31:42] We do not refer to it as “SL 2.0”, because that might imply a little too much linearity, and we don’t want to necessarily constrain ourselves by the past; but we also obviously want to take advantage of, and leverage, our learnings from the past.

Progress to Date

[00:39 / 31:59] But the progress is going very well. I would say we’re about 8-9 months in on working on this; I would say the last six months have been absolutely full-on with a big crew. We’re talking close to 40 people or more; probably 30+ just engineers, and then obviously a bunch of product managers and designers working on this product.

New User Discovery and Experience

[01:12 / 32:32] And there’s a number of areas where we think about it quite differently from Second Life, and we did spend quite a lot of time thinking about why did Second Life hit the ceiling, if you will. You know, many years ago it peaked at 1.1 million monthly users and these days it’s around 900,000, so it’s not a huge difference between the highs and where we are today.

[01:38 / 32:59] But why didn’t it go to five million, ten million, 100 million? And what can we do to solve some of the things we thought caused it to sort-of max out there?

The Lab is hoping their next generation platform will bridge the gap between niche and mass adoption. This may prove easier said than done

[01:52 / 33:11] One area where we want to think quite differently is discovery; how do I discover an experience? Today you pretty much have to be inside Second Life to discover an experience, and we want to make it a lot easier for people to be able to discover an experience from the outside. So that you can create an experience, and [people can] much more easily find your experience and enter your experience without necessarily, at that point, being aware of the notion of this platform or what other types of things are available to them. They can discover those as they go along. Make it easier for you to bring your audience directly into your experiences.

Platform Accessibility

[02:36 / 33:55] Accessibility. Today, when you leave your PC, you pretty much leave Second Life behind, [so] what can we do to make sure it’s available on more platforms? It’s obviously getting more complicated now with all these VR platforms, so what used to be PC – Windows and Mac – which we support today; and then mobile, which you can get access to today if you use a third-party service like SL Go or some other clients that support mobile.

[03:10 / 34:29] But we want to think about mobile as something we can support from the beginning; but again, the number of platforms across mobile, PCs and VR … [there’s] more and more of them, so it’s tough to keep up. So we are building a next generation platform from the ground up to make it possible for us to take advantage of all these different platforms.

Scalability and Creativity

[03:37 / 34:57] Scalability. This is a really important one; an event like this highlights it. There’s a tremendous amount of effort that goes into putting on a meeting like this with just a couple of hundred people in-world. We have to put together four corners and you have to do a lot of work, and it’s still creaking at the seams as we speak, to put something like this on.

[04:06 / 35:25] We want, [with the] next generation platform, to make an event of this size a trivial exercise, and then figure out how, with various techniques, to make it possible to do events like this for tens of thousands of people.

[04:26 / 35:46] That’s one way to think of scalability: how do you get more people in a region, how do you get more people to be able to participate in an event at the same time. But [there’s] also the scalability for creators. How do you make it possible for creators to not only reach a larger audience, but also make more money, too?

[04:44 / 36:14] Take the classroom that Texas A&M put together for teaching kids chemistry. The developers of that experience of teaching chemistry probably did it as a one-off job, for some fee, for Texas A&M to create that classroom. When the classroom is used by students at Texas A&M – you know, 20 students, whatever – then that experience is fully in use.

[05:22 / 36:41] What if that developer could have an unlimited number of copies of that experience to rent out or sell, and every institution could use that virtual classroom all at the same time? That makes for a much more appealing prospect for a creator of an experience, and gives them a greater opportunity to monetise their experience. And then we’ll get more high-quality content creators introduced into the economy, and then everything sort-of heads upwards. So that’s something we think about a lot.

Quality and Ease of Use: Physics, Avatar Design, Shopping

[05:56 / 37:16] We also think about quality. Quality is a range of things: ease of use, quality of physics, lighting, basic performance of how smooth are things, how easy is it to do things, how natural an avatar can we make.

The skeleton system in the new avatars we’re working on is way, way, way more complex than what we have in Second Life.

How can we make it easier for people to shop and get dressed and do these types of activities, with much higher visual fidelity at the same time? So we think a lot about that.

Revenue Generation for the Lab

[06:46 / 38:05] And then monetisation – the way we [Linden Lab] monetise. I’d say our business model is a little bit strange in Second Life today. We charge you a lot for land, and then we charge you almost nothing for all of the transactions that happen in-world. So, I’ve said this before, but generally we think about how do we lower our property taxes by a lot and, at the same time, we’ll have to raise sales taxes to make up some of the difference.

[07:15 / 38:35] And then also how can we build a platform that [is] technically less demanding, so that it costs us less to operate all of this content that we’re running all of the time, so that we can have a lower barrier to entry, and make it possible for people to come in and create some really interesting things at very low cost. And so that’s a big focus for us. How can we make less money per user, almost, but have a lot more users, is kind-of the core of the puzzle we’re trying to solve for.

Initial Alpha Access

[07:52 / 39:13] At the beginning, this platform we’re working on will start to reveal itself to just a few, hand-picked alpha users this summer.

[08:02 / 39:21] Those users will, for starters, need to know a tool called Maya, which is a fairly sophisticated and complicated tool which most of us normal human beings cannot even begin to think of how to use.

[08:16 / 39:36] The reason we started with that is because it would take a lot of effort for us to create those creation tools, and … by allowing third-party content tools, we can focus more of our energy on the runtime aspects of the environment and then layer in more and more in-world creation tools over time.

[08:39 / 39:59] And Maya is just a starting point; ultimately, we want to be able to support a huge array of third-party tools: Maya, 3ds Max, SketchUp – any tool that any creator is comfortable with using, we want to make it possible for them to take content from there directly into this next generation platform, and then basically just instantly walk into that content, and easily invite people into that content and start to socialise in and around that content.

[09:17 / 40:36] And after it starts to reveal itself for a few this summer, as time progresses, we’ll invite more and more people as it gets easier and easier to use, and as we figure out a lot of the bugs and issues to make it a useful experience for you, then obviously, more and more of you will be invited to come on-board over time.

[09:42 / 41:01] Meanwhile, Second Life is not going anywhere. We will continue to improve it, like I said earlier, and it could be years before any of you decide that you would rather use this new thing we’re working on versus Second Life; and that’s fine with us.

Initial Content Focus on Virtual Reality

[10:00 / 41:19] The new platform – in the beginning I mentioned accessibility and multiple platforms; but the two platforms we’re definitely focused on as number one and two are virtual reality and PC. So any content that’s created in the new platform will be a great experience in something like Oculus and on PCs, and then we’ll continue to think about what platforms to bring on next after that. But those are the sort-of number one and two platforms that we’ll support either way.

[10:36 / 41:56] We will spend a lot of time understanding what it means to create content in the context of virtual reality hardware like the Oculus; how does that change the type of content you want to create? What kind of use-cases make sense in the context of virtual reality? So we’ll be spending quite a bit of time in there.

[10:56 / 42:17] And all of this hardware starts to reveal itself for a few hundred bucks later this year, but more likely early next year; so we’re still a year away, I would say, from some of these HMDs, or head-mounted displays, starting to make their way into the hands of ordinary consumers. But we want to make sure that we’re well aware of what it means to create content and experience that content. So that’s definitely top-of-mind.

At this point, Ebbe gave his closing remarks, which were focused on education in general, rather than specific to SL or the next generation platform, and which will be available in my full transcript of the presentation.

Comments on the Next Gen Platform Arising During the Q&A Session

During the main Q&A session, several questions were asked relating to the next generation platform in particular, and Ebbe’s responses to these are given below for reasons of completeness. Timestamps refer to the video only.

Accessibility for the Impaired

[44:47] Will there be voice-to-text and text-to-voice in the new platform? It’s not on the road-map right now, but I also don’t necessarily see it being a hugely complicated thing to add. There are some really great third-party services that we could hook into to make some of those capabilities possible, and it might not even be us; maybe a creator will add that functionality to the next platform.

[45:20] But it is something we talk about quite a bit, because in virtual reality, for those who have tried on some of those goggles, your keyboard and mouse suddenly feel like not proper instruments for interacting and communicating. Voice will obviously be a very natural way of interacting in VR, but that doesn’t work for everybody and in all use cases, so voice-to-text and text-to-voice makes absolute sense.

[45:49] I don’t see it as being a super difficult thing to do; I’m sure we could use some Google service or something like that to make that a pretty darn good experience without too much work. So I’m not sure when that could happen, but I’m sure it’s completely doable.

[46:32] What other work would we do with regards to accessibility … so that the next generation can easily be used by someone who is blind? It’s very early to know exactly how we’ll tackle it; it’s a good question, and I don’t have an answer. We have stated that, at least for the beginning – and possibly never, but never say never – we’re not starting with our client being open-source. So the way some of these viewers you have for Second Life can specifically target those use cases will not be easily done. We might have to partner with some of those people to better understand how we can make sure that we build some of those things into the one viewer we will do. So I will take notes on this, and ask the product team to make sure this is in their minds as they move forward.

Content Creation and C# as the Scripting Language

[48:28] One thing, obviously, like I mentioned earlier, is that we want to make it possible to use a huge range of third-party tools [in the platform] and make sure we support sort-of common file formats as well as we can – whether it’s .FBX or .OBJ, stuff like that. So a lot of third-party tools that do a much better job of specifically creating content for various use cases, you could leverage, which is not that easy to do in Second Life today. We want to make that very easy; support a huge number of third-party tools to be part of the creation, or tool chain as we call it, for the next generation platform.
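Part of why formats like .OBJ make an easy interchange target is how simple they are: plain text, with one record per line. As a purely illustrative sketch (not anything to do with the Lab’s actual importer), a minimal reader for the vertex and face records of a Wavefront .OBJ file might look like this:

```python
# Minimal sketch of reading geometry from Wavefront .OBJ text.
# "v x y z" lines define vertex positions; "f i j k ..." lines define
# faces by 1-based vertex index (possibly with /texture/normal refs).

def parse_obj(text):
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":                  # vertex position record
            vertices.append(tuple(float(p) for p in parts[1:4]))
        elif parts[0] == "f":                # face record; keep only the
            faces.append(tuple(              # vertex index before any "/"
                int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

# A single triangle:
sample = """
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3
"""
verts, faces = parse_obj(sample)
```

A real importer would also handle normals, UVs, materials and malformed input, but the simplicity of the record format is the point here.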

[49:15] The in-world tools will start to focus mostly on how you can lay things out. So I can import a lot of things, but then sort-of place them, rotate them, scale them, and things of that nature. And then, obviously, to ultimately make it easy for you to add scripting on things to create interactivity and other functionality.

[49:37] The scripting language will be C#, which is a good thing, because there’s obviously a lot more people who know how to do things in C# than there ever will be who learn how to do something in Linden Scripting Language, so you’ll have a real programming language to work with.

[49:53] So right there you can easily use a lot of existing talent in the world today to create some really incredible content with third-party tools and C#. You have millions and millions of people that can contribute to creating content, versus having to train someone from the ground up how to create something inside of Second Life, which has proprietary tools for 3D and scripting.

Support for voxels is under consideration with the next generation platform, to present an in-world option for content creation, terrain modification, etc.

[50:24] Like I said, over time, we’re obviously going to make it easier to do layout within the world, but we’re also exploring technologies like voxels to think of ways to make it easy for non-3D experts to be able to create environments and structures; so that’s an area we’re investing some time in right now, to understand what we can bring to the table there.

[50:50] So we can hit a much broader range of creators: from professionals who can use the tools they’re comfortable with today, to hobbyists who are willing to learn some new tools and who could benefit from using things like voxel systems to easily “paint” and chip away to create terrain and tunnels and caves and stuff like that; all the way to making sure that for most of us who really don’t create, but more-or-less just customise the environment, it’s very easy to just furnish our house or set up our lab or get dressed, which I would say can sometimes be maddeningly difficult in Second Life today. So we want to make that as easy as it can possibly be,

[51:47] and we also want to continue to be as open as possible, so that whatever we don’t supply, third parties can sort-of extend what we’re doing to provide additional solutions and value on top of what we’re doing.
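The “paint and chip away” voxel idea Ebbe describes is easy to picture: terrain as a grid of cells that are either filled or empty, with editing reduced to adding or removing cells. A toy sketch (entirely hypothetical – nothing here reflects the platform’s actual data model) might be:

```python
# Toy voxel terrain: a set of occupied integer (x, y, z) grid cells.
# "Painting" adds material; "chipping" carves it away, which is how a
# non-3D-expert could dig a tunnel or cave without modelling tools.

class VoxelTerrain:
    def __init__(self):
        self.filled = set()              # occupied (x, y, z) cells

    def paint(self, x, y, z):            # add a voxel of material
        self.filled.add((x, y, z))

    def chip(self, x, y, z):             # carve a voxel away
        self.filled.discard((x, y, z))

    def is_solid(self, x, y, z):
        return (x, y, z) in self.filled

terrain = VoxelTerrain()
# Raise a 4x4 slab of ground...
for x in range(4):
    for z in range(4):
        terrain.paint(x, 0, z)
# ...then chip a two-voxel tunnel through it.
terrain.chip(1, 0, 0)
terrain.chip(1, 0, 1)
```

The appeal for hobbyists is that every edit is local and reversible – there is no mesh topology to get wrong.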

More on Maya and Adding Other Tools

[52:30] Why do we start with Maya as opposed to something open-source like Blender? Why did we choose something that’s so expensive versus something that’s free or cheap?

[52:52] Like I said, this is very early. We started with the most sophisticated tool that allows us to create the most sophisticated content possible. Not just 3D content, but also animation, where we can fully stress almost every use case that we can think of, so it’s almost for our convenience. It’s not the intent that this is going to be the starting point for you guys. By the time most of you would find it worthwhile to start working in this platform, I would expect us to have support for many other tools.

[53:34] But it was the tool with which we could get our expert users to create the greatest variety of content and stress our engine to the maximum with the least amount of effort. So it was basically the fastest path for us to get the most complex content created as soon as possible without having to build a lot of tools to do that.

[54:04] By the time it’s more generally available, we’ll have support for many other tools, including Blender, and ultimately, whatever tool you choose.

13+ Access

[57:45] One interesting decision we made among ourselves, just the other day, is that we want the next generation platform to be 13+; so we’re going to lower the age by which someone can participate. Today I think it’s 16+, and actually we’ve realised that, legally speaking, there’s no difference between 13 and 18 versus 16 and 18; so our goal is to make it something users 13+ can participate in. And we therefore have to solve whatever issues arise because of that, and that’s a challenge we’ve put in front of ourselves.

Moving Content Between Platforms

[01:02:16] Again, don’t expect full backwards compatibility. Just because you’ve built a fully-functional experience in Second Life doesn’t mean you can just airlift it, and drop it in [to the new platform] and just have it work. Like I said, it will be a different scripting language; the way we think about 3D content will be different, a much more modern approach to it.

[01:02:40] So we [will] take raw content from the outside world, and we will be able to convert that into our internal format, which will be a highly optimised format for a sort-of run-time, and it’s not even clear to us if we will preserve the original format in any shape or form, because it does get converted, like I said, into a highly optimised run-time type of format.

[01:03:12] So, exporting a full experience, it’s not clear how easy it would be, but you obviously still have full control over the content that you originally imported in the first place.

[01:03:26] And also not clear [is] where you could easily drop that content in. I’m not necessarily expecting that there will be lots of different worlds which are compatible with each other, where you can just easily take everything that you do and just move it over to the next one.

[01:03:45] A lot of things will be different. Our scripting language will be different; the way we think about 3D content will be different; the way we think about avatars and skeleton systems will be different. So we’re focused more on high performance and high quality, and probably less on portability. [We] would love for a day when [things] are more portable, but it’s not our main focus.

[01:04:13] A lot of other companies and groups are thinking about universal standards and common ways of describing content and scripting and whatnot so that it could be portable. And maybe when those standards come into place, we will participate; but I think it will be a long time before you can have twelve different companies creating twelve different platforms for virtual experiences, easily interchanging content among themselves. It would just add so much complexity. And ultimately, to have that succeed, you have to target the lowest common denominator, which doesn’t necessarily bring you into the future quickly enough. So that’s our current perspective.

Other Points of Note

Items with timestamps can be found within the meeting video; those without a timestamp were raised in the additional chat conversation with Ebbe.

[00:16:18] As has been repeatedly stated, the new platform will not be 100% compatible with Second Life, so direct content migration in all cases will not be possible; however, import of things like mesh and textures held locally will most likely be possible

[00:19:03] It is anticipated that with the next generation platform, content would not have to disappear permanently due to financial constraints on the part of the creator in meeting costs; if something is not visited very often, it can be stored off-line and very quickly brought back should someone wish to visit it

[00:21:03] Third-party authentication and access control to experiences is being built into the foundation of the next generation platform, which should help organisations to manage access to their experiences using tools already at their disposal

[01:07:22] Very little, if any, functionality is being carried forward from Second Life; almost everything is being built from the ground up, including how inventory is managed

People will be able to preserve their SL identities and use them on the next generation platform, and will be able to move back and forth between the two, once the latter is more open to users

The new platform is still over a year from general availability

A “master account” system is being considered, such that multiple avatar accounts will be possible under a single user account; whether this will include the ability to move inventory between accounts is not at this time clear

On the subject of land:

Individual land areas will be much larger than the SL concept of regions, potentially “thousands” of metres across, and will support the concept of a “mainland” environment

Land areas can be connected, but the mechanism for moving between them has yet to be decided; a gateway system is one idea being considered

The Lab does not view the new platform as a contiguous “world” with a unified “geography” as SL is generally seen – hence the use of “platform”, rather than “world”; as such it might be analogous to a series of interconnected experiences

Experiences are likely to be instance-based; when an avatar limit is reached, an additional iteration is created, allowing more people to engage in it (“With instancing, we create an experience that is optimum with 150 users, but when it reaches that, spin up another one”)
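The instancing model in that last point is a standard pattern in online games, and can be sketched in a few lines: route each arriving avatar to an instance with room, and spin up a fresh instance once existing ones hit the cap. The 150-user figure comes from Ebbe’s example; everything else below is a hypothetical illustration, not the platform’s actual mechanism:

```python
# Sketch of instance-based scaling: each experience holds a list of
# instances, and an avatar joining is placed in the first instance
# with spare capacity, or a newly spun-up one if all are full.

INSTANCE_CAP = 150          # Ebbe's example figure

class Experience:
    def __init__(self):
        self.instances = [[]]            # each instance holds its avatars

    def join(self, avatar):
        for i, inst in enumerate(self.instances):
            if len(inst) < INSTANCE_CAP:
                inst.append(avatar)
                return i                  # index of the instance joined
        self.instances.append([avatar])   # all full: spin up another one
        return len(self.instances) - 1

event = Experience()
slots = [event.join(f"avatar-{i}") for i in range(151)]
```

Here the 151st avatar lands in a second instance, so no single copy of the event ever exceeds the cap – the trade-off being that attendees in different instances don’t see each other.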

A recalcitrant PC made this one a bit of a task. I’ll have a further piece on his presentation, covering education and SL, out in the next 24 hours; I went with the next gen platform info first, as this seems to have gained the most attention from the feedback / IMs I’ve had!

No more mainland 😦 For me, the main difference between SL and HiFi was HiFi = sims managed like domains, SL = massive landmass. User-made mainland is quite risky; as soon as the person behind it doesn’t want to do it anymore/dies/runs out of money, it’s over. What do you think about the Lab’s decision?

Actually, if you look at the extra notes, Ebbe indicated that a “mainland” should be achievable in the new platform; the question would seem to be how contiguous it will really be, and how avatars move between “zones” or “land areas” (or whatever it is that’s analogous to regions in SL).

If I understand what Ebbe said correctly, anything we build will be permanent, and land units will be much larger than current regions. I take that to mean “Mainland” will be up to us to create by (hopefully) linking land areas together (I could envision some kind of gate between areas, since he implies there will not be open region boundaries like in SL).

BTW, I nominate Linden World 2 for a placeholder name. It would align with the naming history of Second Life.

So, great… C#… An ancient 3rd-generation language, where I will have to write 30 lines of code to do the exact thing I can do with one line in LSL (a function-based, 4GL language). If this new platform is going to provide us with a better environment, then why can’t we improve on the scripting technology as well?

You’re sure you’re not confusing something? C# is by no means ancient, it’s a modern, object-oriented, managed programming language which has been introduced by Microsoft as part of their .NET technology in 2001, constantly evolved since then and been ported to about every OS via the Mono project. Today, it’s fueling thousands of professional applications, including full game projects in the Unity3D engine…
I can certainly complete the same task in 1/10th of the time and 1/3rd of the lines of code in C# than in that sorry excuse of a programming language called LSL, which is supposed to have been developed in a day and has never seen any improvement since then…

I think you may be confusing C# with another language, it’s an object oriented language and widely used today for all sorts of reasons, it’s around 15 years old, so not much older than LSL but far more widely used.

Kids use the internet. Things parents would not want their children to see are endemic to the Internet, a lot of it viewable even before things like credit card details have been handed over. Within Second Life, and despite pronouncements of doom and gloom following the demise of the Teen Grid, Second Life seems to have managed segregation based on age pretty well, without tabloid drama, and in a way that provides those offering services to under-18s with a reasonable degree of security.

As it is, the new platform looks set to actually have a whole series of “teen grids” (to use that analogy) within it. Because those focusing on working with under-18s will be able to set-up their “experiences” (be it classrooms, games, study areas, labs, etc.), and fully and easily define and manage how people can connect to it both from outside and inside the platform, and – I suspect, but cannot confirm – define precisely how their “experience” connects to other “experiences”, and possibly how it is even seen by “general” users.

If I’m right on my assumption here, then the Lab could potentially have an enormously powerful platform which could, for example, fully service the education community globally, allow schools, colleges and universities to build their environments, establish controlled gateways between one another to offer collaborative learning spaces, etc., and essentially build their own “world” of connected “experiences” within the platform that never has to touch anything else that is going on within the same platform unless they want it to.

Not open source… so, most likely, “We know what’s best, we get to stick you with V2-crap and you have to deal with it” is what I hear. Well, they can build that battleship if they like, but it’s a carrier-group world. A fantastic world with a crap UI will be ignored as unusable crap — and DESERVE to be.

“People will be able to preserve their SL identities and use them on the next generation platform, and will be able to move back and forth between the two, once the later is more open to users”
Very excited to hear that! One of the most asked questions and knowing that, even if I can’t take my things from SL, I can at least have the same identity, is a big deal to me.

I found the opening keynote speech from Ebbe interesting, and he really did a great job this year. Glad to know that Second Life will continue; however, there was no mention of OpenSim or third-party viewers. It would be nice to hear some exciting news in those areas. I mean, Ebbe did answer a ton of questions and stayed on extra time. Sounds good in the other areas.

The question about bringing back the mentor program is what I asked, and I was rather disappointed by the answer; it’s a real shame. I asked if the Lab could add more Linden office hours, and again there are no plans to do so yet.

It was good to hear more updates on the next generation platform, but I hope it’s worth this long wait. I think that it won’t be publicly available until late 2016/2017. I’ll give it a try and see what it’s like.

I would hope that an OS version of this new world will never ever happen.
It never benefited Linden Lab or the residents, with our world in a very slow growth decline.
OpenSim should look towards High Fidelity; it’s open source.

“SL might eventually be layered on top of the new platform”
that might mean the grid no longer supports 3rd party viewers
SL might one day be merged into the new grid under the new name as well.

“SL might eventually be layered on top of the new platform” — that did indeed catch my attention, too, because it implies some quite interesting things at the platform level, and, indirectly, it also implies that each ‘experience’ or ‘world’ might, in fact, work quite differently (and even use different viewers). Not in 2016/7 for sure, but maybe in 2020 and beyond. That’s actually pretty interesting.

I’m actually sorry to see that visual contiguity, which has been one of the most innovative concepts brought to Second Life back in 2003, is going to be dropped as a concept, even if ‘experiences’ might become huge in size. Then again, I guess that comes from recognising that no other VW platform (including, of course, games) bothers with visual contiguity, and, even in SL, almost all private regions, be they small or large (i.e. several joined together), are also not visually contiguous.

Nevertheless, it would be interesting to understand if several groups wished to have their separately-managed ‘experiences’ in visual proximity to each other, how that would work on the Next Platform (probably not at all). Think about the sailing communities in SL as an example of such motivations.

Minor details, really; without having anything concrete to see, it will be hard to speculate of what might or might not be possible to do…

My own thinking has been from the start that the new platform has never been about a single “world” – but rather multiple “worlds” (hence why I’ve tended to refer to it as the Lab’s new “virtual worldS platform” – the “s” being deliberate).

The Lab has developed something of a recent history of trying to position itself away from being a service provider and into the realm of platform provisioning, so this would make further sense in the context of the next generation project.

As you say, it means they can service different markets / verticals according to the needs of those markets / verticals, and better scale their product to meet the demand of each market / vertical far more easily than having a “one size fits all” service. So the implications here – speculation allowing – are quite staggering in terms of what might be achieved and the level of appeal the Lab is aiming for; again, hence the decision to make the platform’s interoperability with third-party tools and applications as broad as possible.

I am not sure I understand the integration of Maya as exclusive. Is he suggesting that Maya is somehow integrated into the platform? We can already create mesh with any number of tools using the supported DAE file format. I would like to see some metrics of what 3D content tools are used currently for mesh in SL. I rather doubt you would see a high percentage of Maya.

No, it’s not exclusive or integrated. What Ebbe is saying is that the new platform will eventually support a wide range of content creation programmes. However, for the initial alpha period, those invited into the new platform as content creators will be expected to be able to use Maya, as this is the tool the Lab have opted to use for initial “stress testing” of the platform.

This seems to be for a number of reasons. The first is that Maya happened to be the tool that offered the greatest scope for testing the system, as it can be used for a wide variety of content elements – models, animations, etc.

It also seems to be the tool of choice for whomever it is they are partnering with to provide initial content for testing (and other?) purposes.

Finally, and I’m guessing here, but I assume the Lab’s view is that if they can get the optimisation engine (streaming?) to work with all content produced in Maya, they feel they can get it to work with almost anything, given enough time to bang on things.

So, Maya is only a “first instance” tool; support for others will be added.

Will there be only a limited usage of Maya? Perhaps; but to me, the fact that they are using it as their initial baseline, again speaks volumes as to the markets the Lab is looking to access through their new product. I’ll have more to say on that over the weekend, all things being equal!

Ebbe Altberg explained that, for alpha tests, they have to start with something. They chose to start with Maya but it’s only the first step.

My reading and my interpretation is that one of the concepts they’re trying to achieve with this nextgen platform is some sort of non-stop experience. We would be able to create content inside and outside the platform. We would be able to log in from a PC, from a mobile or from a third-party service. Creations would remain permanently in-world. Etc…

