
I was looking for a peaceful musical ambience for my office and I think Chaurasia fits the bill for now.

This article from Modern Mechanix, an August 1935 publication, seems to describe one of the functionalities of Twitter and similar Web 2.0 applications.

[image: scanned Modern Mechanix article]

Step 1 Once the moderator of the WizIQ webconference session sends an invite to your email address, you will receive a message similar to the one below.

step 1 email invite

Step 2 Clicking on the ‘enter’ link will lead to the following page. If you already have an account, just sign in. If you do not have an account, you will have to register/join.

step 2 Session details and prompt to log in

Step 3 Registering/joining is pretty fast. Just type in your name, choose a password, and type in the verification text shown in the section above the textbox.

step 3 Join now if not a member already

Step 4 Next, you will be presented with a Launch session button. Once you click it, you may see software being downloaded and installed within your browser.

step 4 click on launch session

Step 5 If the event has not started yet, you will get the following message. You may choose to set up your audio or video at this point. To do this, click on device settings and follow the instructions.

step 5 wait if session has not started yet

Step 6 When a session starts, and after you have joined, you will see the following. If the moderator is broadcasting video, you will also find a video feed in the top-right section. If you want to send a message to the class, just type in the textbox at the bottom right.

step 6 webconferencing session

Step 7 You can access your device settings by clicking on the wheel icon found just below the video section.

step 7 device settings

I was checking out a few articles from the February 1920 issue of Popular Science magazine. Here are two that amused me, probably because they look like some of the ‘sold/not sold’ items on Jay Leno’s show.

[images: two scanned Popular Science articles]

The problem that designers of virtual worlds face today is the difficulty of creating forms that represent, in a meaningful way, the diverse bundles of narratives we have come to be identified with. One point of failure in avatar design is the absence of a tight coupling between the intentions/behaviors of users and those of their avatars. This in turn destroys the effectiveness of the user representation/avatar as a reliable canvas on which the wide range of emotions and related information we are accustomed to in real-world human communication can be painted for an audience/observer.

When interacting in Second Life (SL), we are pervaded by a sustained sense of ‘uneasiness’, ‘unsatisfactoriness’, ‘a hunger for emotional exchange’, to a large extent because we no longer use a significant part of our brain that is wired for face processing. The case has often been made that SL provides a higher emotional bandwidth than other communication media. It is important to know exactly what the elements of this comparison are. If SL is compared to a text chat environment, then yes, SL does provide more emotional bandwidth. The next question is how much is gained, and whether the amount gained is worth the cost. Now let’s compare SL to a video conferencing application. A video conferencing application provides opportunities to engage the vast face processing capabilities of our brains. It is disingenuous to claim that SL provides a higher emotional bandwidth than a videoconferencing application. Thus pitting SL as it stands against videoconferencing is a non-starter, especially for meeting situations where spatial context (e.g. whether the meeting happens in a virtual board room or a virtual rest room) is meaningless. We might, however, improve human-human communication in virtual environments if we try to merge video conferencing and SL. Let us explore ways to give users the opportunity to use their untapped face processing capabilities. I will suggest only one way; there must be many more.

Maybe we should suspend, for a while at least, talking about avatars and really start focusing on surrogates. Surrogate as a term suggests a weaker user-representation coupling than avatar does. This slight shift in the way we frame the human-human interaction problem in a virtual world frees us from our obsession with trying to create avatars that are tightly coupled to the user, where attempts are made to capture every gesture and emotion of a user for reproduction in a virtual world. Most typically, this is achieved by recreating a quasi-mirror image of the user (e.g. in 3D using gesture tracking mechanisms, 3D cams, physiological signal monitoring and so forth). Quasi, because it won’t be much fun if the precise physical status of users is mirrored in virtual environments: a virtual environment where everyone is in a sitting posture will be quite boring. Research in this area is much needed, and this approach has a wide, active fan base, but I doubt we will see realistic 3D mirror images of users within 5 years. In addition, each of the technologies involved comes with a level of obtrusiveness (e.g. tethered devices, cumbersome calibration setups, etc.) that will scare off users and probably spike their subjective workload, frustration levels, physical fatigue and so forth. Let us look at more near-term solutions. And if we focus on surrogates, maybe we will be happier to inject some AI into our ‘avatars’ so that they get to ‘represent’ us rather than us controlling them. Anyway, that topic is for a different occasion.

SL with audio conferencing has helped to address floor control issues faced by traditional audio conferencing applications in a very obvious and natural way. We can expect that SL with video conferencing might also help to solve some issues we face in traditional video conferencing, e.g. talking heads in windows with no spatial context. One seemingly natural integration with video conferencing that comes to mind is to have chat bubbles replaced by a video stream of the user: a video bubble. The user can choose to point his/her camera at whatever he or she wants. In a show-and-tell session, the user may point the camera at what s/he is doing; at other times, at his or her face. Now, only users in close proximity to your avatar/surrogate will have their ‘video bubble’ activated: proximity mediates not only audio but video as well. Which video streams get activated will be based on the proximity of avatars, so that users don’t get visually swamped and so that occlusion and bandwidth problems are minimized. This, in my view, is a possible near-term solution Linden Lab (LL) could try selling if it wants to pitch SL against video conferencing applications. SL could then claim that it does something more than video conferencing, because video conferencing is part of it, and it would become obvious that the telepresence solution from Cisco is about mirroring while SL is more than mirroring.
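To make the proximity-gating idea concrete, here is a minimal sketch in Python of how a client might decide which video bubbles to subscribe to. Everything here, the Avatar class, the radius, the stream cap, is a hypothetical illustration, not an actual SL or LL API.

```python
import math
from dataclasses import dataclass

@dataclass
class Avatar:
    name: str
    x: float
    y: float
    z: float
    has_video: bool = True  # whether this user is publishing a video bubble

def distance(a: Avatar, b: Avatar) -> float:
    """Euclidean distance between two avatars in world coordinates."""
    return math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2)

def active_video_bubbles(viewer: Avatar, others: list[Avatar],
                         radius: float = 10.0, max_streams: int = 4) -> list[Avatar]:
    """Pick the avatars whose video streams the viewer should receive:
    only publishers within `radius`, nearest first, capped at `max_streams`
    to bound bandwidth use and visual clutter."""
    nearby = [a for a in others if a.has_video and distance(viewer, a) <= radius]
    nearby.sort(key=lambda a: distance(viewer, a))
    return nearby[:max_streams]
```

A client would re-run something like this whenever avatars move, subscribing to streams that enter the result set and dropping those that leave it; the same distance test that already gates spatial audio would gate video.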

In my view, the virtual environment of the NEAR future will be desktop-based and point-and-trigger, and will provide the space to contain 3D audio conferencing + video conferencing (as video bubbles or some variant of them) + information sharing (basically document/web sharing). This solution will address the emotional bandwidth issue more convincingly. Is this approach going to hurt other approaches aimed at creating 3D mirror images of users, future gesture tracking applications, and so forth? Certainly not. Video bubbles will probably die a peaceful death once we work out all the kinks of creating 3D mirror images with a fidelity level that can cross the uncanny valley and produce micro facial gestures. But video bubbles look feasible right now, and the technology is certainly closer at hand. This approach raises many more questions: what will ‘avatar’ body gestures do? Will they communicate anything? That is beside the point; right now I am trying to address the emotional bandwidth issue in the NEAR term. The bodies of the avatars will still have a function: they can be animated in various ways to add context to human-human interactions. The potential of video bubbles for griefing can be dealt with easily, in the same way audio griefing was dealt with.
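Applying the same proximity principle to the 3D audio side of that environment, here is a sketch of distance-based gain rolloff; the radii are illustrative assumptions, not values from any existing platform.

```python
def audio_gain(dist: float, full_volume_radius: float = 2.0,
               cutoff_radius: float = 20.0) -> float:
    """Linear rolloff: full volume inside full_volume_radius, silence beyond
    cutoff_radius, and a linear ramp in between. Constants are illustrative."""
    if dist <= full_volume_radius:
        return 1.0
    if dist >= cutoff_radius:
        return 0.0
    return 1.0 - (dist - full_volume_radius) / (cutoff_radius - full_volume_radius)
```

The point of both sketches is the same: proximity in the shared space, rather than an explicit roster, decides who you see and hear.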

 

This environment was used to train the interview skills of border guards. It is reported that average marks increased from 58% to 86%.

http://www.virtualworldsnews.com/2008/09/quick-stat-seco.html

 


http://www.justleapin.com/technical

JustLeapIn fits in the same category as Google Lively. I am a little surprised, well somewhat, at the number of browser-plugin-based 3D worlds re-emerging on the market, because there have been a lot of earlier efforts that did not pan out, such as the 3DState effort about 8 years ago and Cosmo (SGI’s VRML plugin) much earlier. In my opinion, Lively or JustLeapIn ‘worlds’ will function as a low-barrier entry to the 3D Internet, but they do not seem to have the back end to support deeply plastic virtual spaces. The art path that JustLeapIn provides is not clear, probably because it is still under development. However, at this point in the game, they will provide the kind of surface, cosmetic-level customizations that 80% of users expect, mainly for social applications, but certainly not the deep-level customizations that simulation-for-training applications require. I think we will probably see less diversity of content, and probably less interactive content, in such virtual environments, but hey, at least a wider audience will become familiar with 3D space navigation using arrow keys.
