
Archive for the ‘Massive Multiplayer Environments’ Category

The problem that designers of virtual worlds face today is the difficulty of creating forms that meaningfully represent the diverse bundles of narratives we identify with. One point of failure in avatar design is the absence of a tight coupling between the intentions and behaviors of the user and those of his or her avatar. This in turn undermines the effectiveness of the avatar as a reliable canvas on which the wide range of emotions and related cues we are accustomed to in real-world human communication can be painted for an audience.

When interacting in Second Life (SL), we are pervaded by a sustained sense of uneasiness, unsatisfactoriness, a hunger for emotional exchange, largely because we no longer use the significant part of our brain that is wired for face processing. The case has often been made that SL provides higher emotional bandwidth than other communication media, but it is important to know what exactly is being compared. If SL is compared to a text chat environment, then yes, SL does provide more emotional bandwidth. The next question is how much is gained, and whether the amount gained is worth the cost. Now compare SL to a video conferencing application. Video conferencing engages the vast face processing capabilities of our brains, so it is disingenuous to claim that SL provides higher emotional bandwidth than video conferencing. Pitting SL as it stands against video conferencing is thus a non-starter, especially for meeting situations where spatial context (e.g. whether the meeting happens in a virtual board room or a virtual rest room) does not matter. We might, however, improve human-human communication in virtual environments if we try to merge video conferencing and SL. Let us explore ways to give users the opportunity to tap their unused face processing capabilities. I will suggest only one way; there must be many more.

Maybe we should suspend, for a while at least, talking about avatars and really start focusing on surrogates. 'Surrogate' as a term suggests a weaker user-representation coupling than 'avatar' does. This slight shift in how we frame the human-human interaction problem in a virtual world frees us from our obsession with creating avatars that are tightly coupled to the user, where attempts are made to capture every gesture and emotion of a user for reproduction in the virtual world. Most typically, this is attempted by recreating a quasi-mirror image of the user (e.g. in 3D, using gesture tracking mechanisms, 3D cameras, physiological signal monitoring and so forth). Quasi, because it would not be much fun if the precise physical status of users were mirrored in virtual environments: a virtual environment where everyone is in a sitting posture would be quite boring. Research in this area is much needed and the approach has a wide, active fan base, but I doubt we will see realistic 3D mirror images of users within five years. In addition, each of the technologies involved comes with a level of obtrusiveness (e.g. tethered devices, cumbersome calibration setups) that will scare off users and probably spike their subjective workload, frustration levels, physical fatigue and so forth. Let us look at more near-term solutions. And if we focus on surrogates, maybe we will be happier to inject some AI into our 'avatars' so that they get to represent us rather than us controlling them. Anyway, that topic is for a different occasion.

SL with audio conferencing has helped address the floor control issues faced by traditional audio conferencing applications in a very obvious and natural way. We can expect that SL with video conferencing might likewise solve some problems of traditional video conferencing, e.g. talking heads in windows with no spatial context. One seemingly natural integration that comes to mind is to replace chat bubbles with a video stream about the user: a video bubble. The user can point his or her camera at whatever he or she wants. In a show-and-tell session, the user may point the camera at what he or she is doing; at other times, at his or her face. Only users in close proximity to an avatar/surrogate will have their video bubble activated: proximity mediates not only audio but video as well. Which video streams get activated is based on the proximity of avatars, so that users don't get visually swamped and occlusion and bandwidth problems are minimized. In my view, this is a possible near-term solution LL could try selling if it wants to pitch SL against video conferencing applications. SL could then claim that it does something more than video conferencing, because video conferencing is part of it, and it would become obvious that the telepresence solution from Cisco is about mirroring, while SL is more than mirroring.
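The proximity rule described above can be sketched in a few lines. This is a minimal illustration, not Second Life code: the function name, the radius, and the stream cap are my own hypothetical choices.

```python
import math

def active_video_bubbles(viewer_pos, avatar_positions, radius=20.0, max_streams=6):
    """Pick which video bubbles to stream to a viewer: only avatars within
    `radius` metres, capped at the `max_streams` nearest, so the viewer is
    not visually swamped and bandwidth stays bounded. (The radius and the
    cap are hypothetical tuning values.)"""
    nearby = sorted(
        (math.dist(viewer_pos, pos), name)
        for name, pos in avatar_positions.items()
        if math.dist(viewer_pos, pos) <= radius
    )
    return [name for _, name in nearby[:max_streams]]
```

An avatar fifty metres away would keep its static appearance, while the handful of nearest avatars within range would have live video bubbles, which is exactly how proximity already mediates audio in SL.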

In my view, the virtual environment of the NEAR future will be desktop based, point and trigger, and will provide the space to contain 3D audio conferencing + video conferencing (as video bubbles or some variant of them) + information sharing (basically document/web sharing). This solution will address the emotional bandwidth issue more convincingly. Is this approach going to hurt other approaches aimed at creating 3D mirror images of users, future gesture tracking applications and so on? Certainly not. Video bubbles will probably die a peaceful death once we work out all the kinks in creating 3D mirror images with a fidelity level that can cross the uncanny valley and reproduce micro facial gestures. But video bubbles look feasible right now, and the technology is certainly closer at hand. This approach raises further questions: what will avatar body gestures do? Will they communicate anything? That is beside the point; right now I am trying to address the emotional bandwidth issue in the NEAR term. The bodies of the avatars will still have a function: they can be animated in various ways to add context to human-human interactions. The potential of video bubbles for griefing can be dealt with easily, in the same way audio griefing was dealt with.


http://www.justleapin.com/technical

JustLeapIn fits in the same category as Google Lively. I am a little surprised at the number of browser-plugin-based 3D worlds re-emerging on the market, because there have been many earlier efforts that did not pan out, such as the 3DState effort roughly eight years ago and Cosmo (the VRML plugin from SGI) much earlier. In my opinion, Lively or JustLeapIn 'worlds' will function as a low-barrier entry to the 3D Internet, but they do not seem to have the back end to support deeply plastic virtual spaces. The art path that JustLeapIn provides is not clear, probably because it is still under development. At this point in the game, however, they will provide the kind of surface, cosmetic-level customizations that 80% of users expect, mainly for social applications, but certainly not for simulation-for-training applications that require deep customization. We will probably see less diversity and less interactivity in the content of such virtual environments but, hey, at least a wider audience will become familiar with navigating 3D space using arrow keys.



http://technology.timesonline.co.uk/tol/news/tech_and_web/article4557935.ece

Image Metrics has emerged as a leader in the creation of realistic facial animations. This of course has a lot of implications for what will be possible in the virtual worlds of the future. Check out the animation. She is considered one of the first animations to have overleapt the long-standing barrier known as the 'uncanny valley': the perception that animation becomes less believable, even unsettling, as it approaches but fails to fully reach human likeness.

Some other informative video clips on this topic are available here.



We are seeing more and more games designed to include user-created content. Spore is one example; LittleBigPlanet is another.



For many who tried the Vollee client to access Second Life on their cell phones (only 40 handsets are supported right now), the experience was just awesome. The video quality and performance actually look good. I expect a deluge of sign-ups for this.


There has been an ongoing debate in deeply customizable massive multiplayer environments such as Second Life about whether users really want to create content. One way to address this question is, of course, to get the data from the virtual world itself. Another is to look at what is happening with other forms of content on the web. There is also the problem of defining content. Is a comment posted on a blog 'content'? Yes and no. One could argue that a comment could be so well informed, structured and informative that it is indeed content in its own right. In any case, I am assuming the surveys were intelligent enough to account for these possibilities.

Here are a few graphs that provide a good bird's-eye view of the current state of user-generated content (quantity-wise, not quality-wise). A more rigorous study would also have included the number of hits each piece of content receives. It is also important to keep the growth of overall Internet usage in mind before reading meaning into the eMarketer data on the growth of user-generated content.

From World Internet Usage Statistics News and Population Stats:

the number of Internet users (in millions) across world regions is as follows:

1. Asia (462), 2. Europe (344), 3. North America (237), 4. Africa (44), 5. World Total (1262)

the penetration rate of the Internet is as follows:

1. North America (71%), 2. Oceania/Australia (57%), 3. Europe (43%), 4. Africa (5 %), 5. world average (19%)

So it is reasonable to assume that the growth we see in graph 1 can be explained by the growth in Internet penetration.

From graph 2, it appears there is significant money to be made from advertising revenues, especially with the emergence of technologies such as AdSense and its variants, which are now expanding into video and audio as well. We are going to see more and more ads integrated into audio and video, even 'non-commercial' user-generated audio and video. Just as folks hated advertising in text, the same will happen for video and audio, but after some time everybody will live with it. It's unfortunate, but that's the way it is going to be. Or maybe not: we might see another market opportunity, for ad strippers. So I will take back my words about being certain that folks will have to live with advertising on pretty much everything.

From Table 1, what interested me is that 35% of all US users generated their own 'content'. This is a pretty high percentage, but the spectrum of content is quite wide here (probably including video comments on YouTube as well). Right now, it is estimated that only 10% of users of Second Life create content. This figure is believable given the complexity of this type of content creation compared to easier forms such as blogging (technically, process-wise, and in terms of creation-tool usability).

So here’s my prediction: I estimate the upper bound for user-content generation in Second Life to be around 35%. How long will it take for the 10% to grow to 35%? 3-4 years.

 

Graph 1

Graph 2: US User-Generated Content Advertising Revenues, 2006-2011 (millions)

Table 1: US Internet Users Who Create Their Own Online Content, by Access Technology, 2005

Table 2: Demographic Profile of US Internet Users Who Post Online Content, 2005 (% of respondents in each group)


Second Life has often been mistaken for a game, and an especially uninteresting one: poor graphics, no narrative, sure to bore anyone who has the courage to slog through the orientation islands. I have often thought that those criticisms said a lot more about the critics than about Second Life, the platform. In fact, every attempt to Digg something positive about Second Life is met with a lot of derogatory comments from a wide spectrum of users, the most venomous coming from the 'hard-core' gaming community. Here's an example of a recent machinima made in Second Life. This is not your usual 'Halo' or 'Call of Duty' variety of machinima, where the constraints of a ready-made environment and standard game-like avatars stiffen any story line not closely tied to the game. Second Life is the only massive multiplayer environment right now that allows the kind of deep customization and clean animation a serious animator needs to treat elaborate subjects. Check out this animation by Lainy Voom to experience the ambience and effects that Second Life can provide today (you will, at this time, need to use the WindLight client). Only one thing remains to be added to kick Second Life to the next level: lip synching. But for now, if you can imagine a story line that does not involve speaking avatars and is high on ambience, based, for example, on a cool narrative from a poem, why not use Second Life to create your first machinima? You might be surprised at what you can create.

