Second Life is an excellent example of educational gaming
But I think after working on VR headsets in particular for the last five or six years with High Fidelity, we discovered how difficult it is to actually make that final jump to getting everybody using this stuff. And then I think the second thing is, I'm really concerned that any single-company, advertising-based, attention-based strategy for building virtual spaces would potentially be extremely damaging to people. I said this all along with Second Life too, so my tone hasn't changed on this.
I have become much more concerned than I was before. I think that we just didn't think about all the things that could go wrong 20 years ago. But now, with the benefit of hindsight, it's more obvious what we need to be concerned about.

Spectrum: At a more practical level, with Second Life you were running large social virtual experiences similar to what these other companies are now proposing. What were the biggest challenges?

Rosedale: One is how many people can be in the same place at the same time.
Many human experiences that are interesting require a lot of people to be within earshot and visibility of each other. That's still a largely unsolved technical problem. Whether you're talking about Fortnite, or Second Life, or Roblox, it isn't yet possible to get that number of people in the same place. The famous Fortnite concerts that we've seen all split the audience across many separate copies of the concert space, with only a limited number of people together in each.
And that's a very different experience from what we expect when we go to a live music event. Another one would be user-generated content. For any of these metaverse ideas to pan out, the content, the avatars, the buildings, the experiences, the games, they all need to be buildable by a really large number of people, in much the same way that websites were buildable in parallel by a lot of people at once.
We have to do the same thing with the metaverse, and there are not, as yet, toolkits and systems that would enable that. That definitely seems like a hard requirement for getting anything near the scale of the Internet. If you actually want to build a virtual world at multi-billion-person scale, everyone will have to somehow work in parallel to get all those spaces up. The idea that it would all be done by one company like Facebook or Google or Apple seems completely impractical.
The other thing we need is a digital currency so people can engage in trade. To get metaverse systems where one person can make a car and sell it to a lot of other people requires that you have some currency system that spans multiple local currencies.
We do have that to some extent in the cryptocurrencies, but they have other problems.

Spectrum: What are you doing with your current company High Fidelity? Do you see the products you're producing as components of a future metaverse, or is it something different?
Rosedale: We're entirely focused on spatial audio right now, because it's a good business and it's growing quickly. The ability to do good 3D audio for a whole bunch of people at the same time is a critical component of this stuff. We also think it's progressive, a reasonable thing to work on that we can actually get working. We've been enthusiastic about every component of this, but we do feel that audio is the best underlying component, one that everybody is going to need. As thinkers and leaders in the space, we do continue to look at it and to think about what happens next and how we can help.
I mean, I love this stuff. I'm always going to be working on it one way or another.

Spectrum: VR adoption has always lagged expectations. Do you see any reason why that might be different this time around?

Rosedale: In a word, no. I don't see a magic new thing. I hoped that the VR headsets would be that.
And that's why we raised so much money, hired so many people, and did so much work on that in the first stage of High Fidelity. But I do think that the technical problems in front of us, around comfort, typing speed, and communicating comfortably with others, are still very daunting.
And so I don't think there's anything new.

Edd Gent is a freelance science and technology writer based in Bangalore, India. His writing focuses on emerging technologies across computing, engineering, energy, and bioscience.
He's on Twitter at EddytheGent and can be emailed at edd dot gent at outlook dot com.

Rosedale made a few comments about typing being limited with VR headsets. It seems like that has gotten pretty good, and with the newest deep-learning AI, voice-to-text transcription could get very good. Wouldn't that be a solution for this problem, at least?
This computer rendering depicts the pattern on a photonic chip that the author and his colleagues have devised for performing neural-network calculations using light.

Think of the many tasks to which computers are being applied that in the not-so-distant past required human intuition.
Computers routinely identify objects in images, transcribe speech, translate between languages, diagnose medical conditions, play complex games, and drive cars. The technique that has empowered these stunning developments is called deep learning, a term that refers to mathematical models known as artificial neural networks. Deep learning is a subfield of machine learning, a branch of computer science based on fitting complex models to data.
While machine learning has been around a long time, deep learning has taken on a life of its own lately. The reason for that has mostly to do with the increasing amounts of computing power that have become widely available, along with the burgeoning quantities of data that can be easily harvested and used to train neural networks. The amount of computing power at people's fingertips started growing in leaps and bounds at the turn of the millennium, when graphics processing units (GPUs) began to be harnessed for nongraphical calculations, a trend that has become increasingly pervasive over the past decade.
But the computing demands of deep learning have been rising even faster. This dynamic has spurred engineers to develop electronic hardware accelerators specifically targeted to deep learning, Google's Tensor Processing Unit (TPU) being a prime example. Here, I will describe a very different approach to this problem: using optical processors to carry out neural-network calculations with photons instead of electrons.
To understand how optics can serve here, you need to know a little bit about how computers currently carry out neural-network calculations. So bear with me as I outline what goes on under the hood. Almost invariably, artificial neurons are constructed using special software running on digital electronic computers of some sort.
That software provides a given neuron with multiple inputs and one output. The state of each neuron depends on the weighted sum of its inputs, to which a nonlinear function, called an activation function, is applied.
The result, the output of this neuron, then becomes an input for various other neurons. For computational efficiency, these neurons are grouped into layers, with neurons connected only to neurons in adjacent layers. The benefit of arranging things that way, as opposed to allowing connections between any two neurons, is that it allows certain mathematical tricks of linear algebra to be used to speed the calculations.
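To make that concrete, here is a minimal sketch of a single layer's forward pass in Python with NumPy. The layer sizes, the choice of a logistic (sigmoid) activation, and all of the variable names are illustrative assumptions, not details taken from any particular system:

```python
import numpy as np

def sigmoid(z):
    # One common choice of nonlinear activation function;
    # ReLU or tanh would slot in the same way.
    return 1.0 / (1.0 + np.exp(-z))

def layer_forward(x, W, b):
    # Each neuron forms the weighted sum of its inputs (plus a bias),
    # then applies the activation function. Because the neurons are
    # grouped into a layer, all of those weighted sums collapse into
    # a single matrix-vector product, W @ x.
    return sigmoid(W @ x + b)

rng = np.random.default_rng(seed=0)
x = rng.standard_normal(4)         # 4 inputs feeding this layer
W = rng.standard_normal((3, 4))    # 3 neurons, each weighting 4 inputs
b = rng.standard_normal(3)         # one bias per neuron

y = layer_forward(x, W, b)         # y now serves as input to the next layer
print(y)
```

Stacking calls like this, layer after layer, is essentially all a deep network's forward pass amounts to, which is why the matrix arithmetic dominates the cost.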
While they are not the whole story, these linear-algebra calculations are the most computationally demanding part of deep learning, particularly as the size of the network grows. This is true both for training (the process of determining what weights to apply to the inputs for each neuron) and for inference (when the trained neural network is providing the desired results). What are these mysterious linear-algebra calculations?
They aren't so complicated really. They involve operations on matrices, which are just rectangular arrays of numbers: spreadsheets, if you will, minus the descriptive column headers you might find in a typical Excel file.
This is great news because modern computer hardware has been very well optimized for matrix operations, which were the bread and butter of high-performance computing long before deep learning became popular.
The relevant matrix calculations for deep learning boil down to a large number of multiply-and-accumulate operations, whereby pairs of numbers are multiplied together and their products are added up.
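As a sketch of what that means in code (the function name and the tiny example matrix are mine, purely for illustration), a matrix-vector product can be written as nothing but multiply-and-accumulate steps:

```python
import numpy as np

def matvec_as_macs(W, x):
    # Computes y = W @ x one multiply-and-accumulate at a time:
    # every output element is a running sum of pairwise products.
    rows, cols = W.shape
    y = np.zeros(rows)
    for i in range(rows):
        acc = 0.0
        for j in range(cols):
            acc += W[i, j] * x[j]  # one multiply-and-accumulate operation
        y[i] = acc
    return y

W = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([5.0, 6.0])

print(matvec_as_macs(W, x))        # [17. 39.]
assert np.allclose(matvec_as_macs(W, x), W @ x)
```

Applying an m-by-n matrix to a vector costs m times n of these operations, which is why deep-learning hardware is commonly rated by how many multiply-accumulates it can perform per second.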
Two beams whose electric fields are proportional to the numbers to be multiplied, x and y, impinge on a beam splitter (blue square). The beams leaving the beam splitter shine on photodetectors (ovals), which provide electrical signals proportional to these electric fields squared. Inverting one photodetector signal and adding it to the other then results in a signal proportional to the product of the two inputs.
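A short calculation shows why this layout computes a product. Assuming an ideal 50/50 beam splitter (an idealization for this sketch), its two output beams carry electric fields proportional to (x + y)/√2 and (x − y)/√2. The photodetectors produce signals proportional to the squares of those fields, so inverting one signal and adding it to the other gives

$$\frac{(x+y)^2}{2} - \frac{(x-y)^2}{2} = 2xy,$$

a signal proportional to the product of the two inputs, just as the caption describes.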
Over the years, deep learning has required an ever-growing number of these multiply-and-accumulate operations. Consider LeNet, a pioneering deep neural network designed to do image classification. In 1998 it was shown to outperform other machine-learning techniques for recognizing handwritten letters and numerals. But by 2012, AlexNet, a neural network that crunched through about 1,600 times as many multiply-and-accumulate operations as LeNet, was able to recognize thousands of different types of objects in images.
Advancing from LeNet's initial success to AlexNet required almost 11 doublings of computing performance. During the 14 years that took, Moore's law provided much of that increase. The challenge has been to keep this trend going now that Moore's law is running out of steam.
The usual solution is simply to throw more computing resources, along with time, money, and energy, at the problem. As a result, training today's large neural networks often has a significant environmental footprint. One study found, for example, that training a certain deep neural network for natural-language processing produced five times the CO2 emissions typically associated with driving an automobile over its lifetime.
Improvements in digital electronic computers allowed deep learning to blossom, to be sure.

Not all Second Life users will be able to access Skill Gaming Regions, and we do not want to block residents from accessing any region on the Mainland, so we do not plan to permit the Skill Gaming Region designation for any portion of the Mainland at this time. We cannot provide legal advice or analysis of your activities under applicable law. Our revised policy will become effective in Second Life on September 1. When you attempt to enter a Skill Gaming Region, a check will occur behind the scenes to confirm whether the activity is permitted in your jurisdiction and whether you meet the age requirements for this activity.
You can check the list of prohibited jurisdictions and age requirements here. It is your responsibility to know which jurisdictions are prohibited and to refrain from activity in a Skill Gaming Region if you fail to meet the requirements. Attempts to circumvent our entry restrictions will be a violation of our Terms of Service and may result in termination of your account in Second Life. The easiest way to find out whether you are eligible is simply to attempt to enter a Skill Gaming Region.

Any approved game of skill that requires or permits Linden Dollars to participate and provides a payout in Linden Dollars is subject to this policy. Any free-play skill game not on the approved list of games of skill that does not require or permit Linden Dollars to participate and that offers no Linden Dollar payout is not subject to this policy.
Each inworld object implementing a game of skill may only be owned by a single approved operator; group ownership of such objects will not be allowed. If you have created a game of skill that requires or permits Linden Dollars to participate and provides a payout in Linden Dollars, you are subject to this policy. If you wish to operate or sell such a game in Second Life, you will need to apply for approval.
If you would like to verify that a purchased game of skill has been approved by Linden Lab, please check the list of approved creators and their approved games of skill.

Play games. Watch movies. Have sex. The mainstream press has struggled with how to characterize Second Life.
But what is it, really? Linden Lab, the company that created the platform that is Second Life, is emphatic that their creation is not a game. There are no monsters to kill, no real objective to speak of. But the grow-your-own quality of these games resonated with players. The goal is simple: Players enter a multiplayer online world and go on quests alone or with other people.
To feed the insatiable demand for more characters, more levels and more weapons, Blizzard employs a flotilla of designers, artists, animators and programmers. Goza falls into the latter camp. But she wanted to customize her game-playing experience, and she knew other people felt the same way.
And yes, she gets it. She turned it into a sweeping, palm-tree-studded oasis for her friends and Second Life newbies. She and her friend Lucius Templar created a movie theater, an art gallery, an amusement park, and a shopping center for Djork.
The residents who visit each month (and there are thousands of them) spend time snorkeling, shopping, fishing, and belly-dancing. The lodge with the llamas outside? Created by a resident. The cool animation that can change your awkward, new-avatar gait into the feline prowl of a supermodel? Created by a resident, too. Motivations vary. For Goza, it was also a way to gather up her favorite things in Second Life and make them permanent. For love or money?