Amazon Is Hiring to Build an “Advanced” and “Magical” AR/VR Product, by Samuel Axon

“Amazon plans to join other tech giants like Apple, Google, and Meta in building its own mass-market augmented reality product, job listings discovered by Protocol suggest.”

“The numerous related jobs included roles in computer vision, product management, and more. They reportedly referenced ‘XR/AR devices’ and ‘an advanced XR research concept.’ Since Protocol ran its report on Monday, several of the job listings referenced have been taken down, and others have had specific language about products removed.”

“Google, Microsoft, and Snap have all released various AR wearables to varying degrees of success over the years, and they seem to be still working on future products in that category. Meanwhile, it's one of the industry's worst-kept secrets that Apple employs a vast team of engineers, researchers, and more who are working on mixed reality devices, including mass-market consumer AR glasses. And Meta (formerly Facebook) has made its intentions to focus on AR explicitly clear over the past couple of years.”

"It's not all that surprising that Amazon is chasing the same thing. As Protocol notes, Amazon launched a new R&D group led by Kharis O'Connell, an executive who has previously worked on AR products at Google and elsewhere.”

“But Amazon's product might not be the same kind of product that we know Meta and Apple have focused on; it might not be a wearable at all. Some of Amazon's job listings refer to it as a ‘smart home’ device. And Amazon is among the tech companies that have experimented with room-scale projection and holograms instead of wearables for AR.”

Click here for the full article

Deep Science: Combining Vision and Language Could Be the Key to More Capable AI, by Kyle Wiggers

“Depending on the theory of intelligence to which you subscribe, achieving ‘human-level’ AI will require a system that can leverage multiple modalities — e.g., sound, vision and text — to reason about the world. For example, when shown an image of a toppled truck and a police cruiser on a snowy freeway, a human-level AI might infer that dangerous road conditions caused an accident. Or, running on a robot, when asked to grab a can of soda from the refrigerator, they’d navigate around people, furniture and pets to retrieve the can and place it within reach of the requester.”

“Today’s AI falls short. But new research shows signs of encouraging progress, from robots that can figure out steps to satisfy basic commands (e.g., ‘get a water bottle’) to text-producing systems that learn from explanations. In this revived edition of Deep Science, our weekly series about the latest developments in AI and the broader scientific field, we’re covering work out of DeepMind, Google and OpenAI that makes strides toward systems that can — if not perfectly understand the world — solve narrow tasks like generating images with impressive robustness.”

“DALL-E 2, the improved successor to OpenAI’s DALL-E, is easily the most impressive project to emerge from an AI research lab. As my colleague Devin Coldewey writes, while the original DALL-E demonstrated a remarkable prowess for creating images to match virtually any prompt (for example, ‘a dog wearing a beret’), DALL-E 2 takes this further. The images it produces are much more detailed, and DALL-E 2 can intelligently replace a given area in an image — for example, inserting a table into a photo of a marbled floor replete with the appropriate reflections.”

“Another component is language understanding, which lags behind in many aspects — even setting aside AI’s well-documented toxicity and bias issues.”

“In a new study, DeepMind researchers investigate whether AI language systems — which learn to generate text from many examples of existing text (think books and social media) — could benefit from being given explanations of those texts.”

Click here for the full article

From Virtual Reality Afterlife Games to Death Doulas: Is Our View of Dying Finally Changing?, by Sara Moniuszko

“Everyone dies. And while it will happen to all of us, we rarely talk about it with ease. But that's starting to change, from video games about the afterlife to TV shows that help prepare you to pass.”

“Although death remains a painful and mysterious part of life, experts say new technologies, grieving options and professions related to end-of-life care are shifting society's comfort levels around discussing it.”

“‘If we don't talk about (death and loss), we're basically ignoring it. By ignoring this fact, it only serves to deepen the pain,’ explains Ron Gura, co-founder and CEO of Empathy, a platform that helps families navigate the death of loved ones.”

“We use apps every day to connect with friends and order food, but we didn't have one to help us through challenging moments like loss and grief – until now.”

“Technology can not only help people through the process of loss, but also spark conversations around death, explains Dr. Candi Cann, Baylor University death scholar and researcher.”

“She's been tracking the rise of the intersection of mourning and gaming, pointing to a virtual reality game released last year called ‘Before Your Eyes,’ which shows the perspective of a soul's journey on its way to the afterlife.”

Click here for the full article

VR Role-Play Therapy Helps People With Agoraphobia, Finds Study, by Nicola Davis

“It’s a sunny day on a city street as a green bus pulls up by the kerb. Onboard, a handful of passengers sit stony-faced as you step up to present your pass. But you cannot see your body – only a floating pair of blue hands.”

“It might sound like a bizarre dream, but the scenario is part of a virtual reality (VR) system designed to help people with agoraphobia – those for whom certain environments, situations and interactions can cause intense fear and distress.”

“Scientists say the approach enables participants to build confidence and ease their fears, helping them to undertake tasks in real life that they had previously avoided. The study also found those with more severe psychological problems benefited the most.”

“The VR experience begins in a virtual therapist’s office before moving to scenarios such as opening the front door or being in a doctor’s surgery, each with varying levels of difficulty. Participants are asked to complete certain tasks, such as asking for a cup of coffee, and are encouraged to make eye contact or move closer to other characters.”

Click here for the full article

Virtual Reality and Motor Imagery With PT Ease Motor Symptoms, by Lindsey Shapiro

“Combining a 12-week program of virtual reality (VR) training and motor imagery exercises with standard physical therapy (PT) significantly lessened motor symptoms — including tremors, slow movements (bradykinesia), and postural instability — among people with Parkinson’s disease, according to a recent study.”

“‘To the best of our knowledge, this is the first trial to show the effects of … virtual games and [motor imagery] along with routine PT on the components of motor function such as tremors, posture, gait, body bradykinesia, and postural instability in [Parkinson’s disease] patients,’ the researchers wrote.”

“Recent evidence suggests that virtual reality training — using video game systems — may be a highly effective supplement to physical therapy, with the ability to improve motor learning and brain function. VR training also has been shown to boost attention, self-esteem, and motivation, as well as increase levels of the brain’s reward chemical, dopamine, which may increase the likelihood of participation and therapy adherence.”

“Motor imagery training, known as MI, is a process in which participants imagine themselves performing a movement without actually moving or tensing the muscles involved in the movement. It is thought that such activity strengthens the brain’s motor cortex, and also may be a promising therapeutic approach in Parkinson’s.”

“Now, a team of researchers in Pakistan examined whether a combined approach of VR and MI could lessen disease symptoms in people with Parkinson’s. Patients ages 50 to 80, with idiopathic Parkinson’s — whose disease is of unknown cause — were recruited from the Safi Hospital in Faisalabad.”

“Despite also being limited by a small sample size, the results overall suggest that VR together with MI training and routine physical therapy ‘might be the most effective in treating older adults with mild-to-moderate [Parkinson’s disease] stages,’ researchers concluded.”

Click here for the full article

Museum Launches Dead Sea NFT Photo Collection On World Water Day, by Simona Shemer

“The Dead Sea Museum, a physical art museum planned to be built in the city of Arad, has launched its first NFT collection of Dead Sea photographs by environmental art activist Noam Bedein, founder of the Dead Sea Revival Project.”

“The collection of 100 selected images, dubbed Genesis NFT, highlights the disappearing beauty of the Dead Sea in order to raise environmental awareness for the world wonder, a statement from the museum said.”

“Bedein is the first to document the Dead Sea World Heritage Site solely by boat and has a database of over 25,000 photographs from the past six years, showing fascinating hidden layers exposed at the lowest point on Earth, and revealed due to the drop in sea level, which is currently at its lowest level in recorded history.”

“The auction will start on March 22nd, which is World Water Day, the annual observance established by the United Nations that highlights the importance of freshwater and advocates for its sustainable management. It will end on April 22, which is Earth Day. All proceeds from the sale will be used for the museum planning and for legal and legislative efforts to restore water to the Dead Sea.”

“Preservation of the Dead Sea is important because ‘it holds a wealth of resources that help people and the planet, and it is deeply rooted in the vast history of this land and the people of Israel,’ Fruchter explains.”

Click here for the full article

Want to Make Robots Run Faster? Try Letting AI Take Control, by James Vincent

“Quadrupedal robots are becoming a familiar sight, but engineers are still working out the full capabilities of these machines. Now, a group of researchers from MIT says one way to improve their functionality might be to use AI to help teach the bots how to walk and run.”

“Usually, when engineers are creating the software that controls the movement of legged robots, they write a set of rules about how the machine should respond to certain inputs. So, if a robot’s sensors detect x amount of force on leg y, it will respond by powering up motor a to exert torque b, and so on. Coding these parameters is complicated and time-consuming, but it gives researchers precise and predictable control over the robots.”
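To make that concrete, here is a minimal sketch (not taken from the MIT work) of what one such hand-written rule might look like. The leg names, force threshold, and gain below are hypothetical values chosen purely for illustration.

```python
# Minimal sketch of a hand-written control rule of the kind described above.
# The leg names, force threshold, and torque gain are hypothetical, not values
# from any real robot controller.

FORCE_THRESHOLD_N = 40.0   # contact force above which a leg counts as loaded
TORQUE_GAIN = 0.05         # hand-tuned mapping from sensed force to motor torque

def leg_controller(sensed_force_n: float) -> float:
    """Return a motor torque (N*m) for one leg given its sensed contact force (N)."""
    if sensed_force_n > FORCE_THRESHOLD_N:
        # Leg is bearing weight: push back proportionally to support the body.
        return TORQUE_GAIN * sensed_force_n
    # Leg is in the air: apply no support torque.
    return 0.0

# Example: a quadruped polls each leg's force sensor and sets motor torques.
for leg, force in {"front_left": 55.0, "front_right": 12.0,
                   "rear_left": 48.0, "rear_right": 9.0}.items():
    print(leg, "->", round(leg_controller(force), 2), "N*m")
```

The point of the sketch is how much must be specified by hand: every threshold and gain is a decision an engineer has to make and tune for each terrain and failure case.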

“An alternative approach is to use machine learning — specifically, a method known as reinforcement learning that functions through trial and error. This works by giving your AI model a goal known as a ‘reward function’ (e.g., move as fast as you can) and then letting it loose to work out how to achieve that outcome from scratch. This takes a long time, but it helps if you let the AI experiment in a virtual environment where you can speed up time. It’s why reinforcement learning, or RL, is a popular way to develop AI that plays video games.”
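As a toy illustration of the reward-function idea, the sketch below replaces the physics simulator with a fake one-parameter "gait" model and uses simple random search as a stand-in for the more sophisticated reinforcement learning algorithms real systems use. Every name and number in it is invented for illustration.

```python
import random

# Toy stand-in for a physics simulator: a one-parameter "gait" whose forward
# speed happens to peak near an arbitrary value. Purely illustrative.
def simulate_episode(gait_param: float) -> float:
    """Run one simulated episode and return the reward: total forward distance."""
    speed = max(0.0, 2.0 - abs(gait_param - 1.3))   # fastest near 1.3
    return sum(speed + random.uniform(-0.1, 0.1)    # per-step noise
               for _ in range(100))                 # 100 timesteps

# Trial-and-error search guided only by the reward ("move as fast as you can"):
# propose small random changes to the policy parameter and keep improvements.
best_param, best_reward = 0.0, simulate_episode(0.0)
for _ in range(500):
    candidate = best_param + random.gauss(0.0, 0.1)
    reward = simulate_episode(candidate)
    if reward > best_reward:
        best_param, best_reward = candidate, reward

print(f"learned gait parameter ~ {best_param:.2f}, reward ~ {best_reward:.1f}")
```

Nothing in the loop encodes how to walk; the only signal is the reward, which is why running thousands of such trials in a sped-up virtual environment is so attractive.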

“Margolis and Yang say a big advantage of developing controller software using AI is that it’s less time-consuming than messing about with all the physics. ‘Programming how a robot should act in every possible situation is simply very hard. The process is tedious because if a robot were to fail on a particular terrain, a human engineer would need to identify the cause of failure and manually adapt the robot controller,’ they say.”

Click here for the full article

A Brief Tour of the PDP-11, the Most Influential Minicomputer of All Time, by Andrew Hudson

“The history of computing could arguably be divided into three eras: that of mainframes, minicomputers, and microcomputers. Minicomputers provided an important bridge between the first mainframes and the ubiquitous micros of today. This is the story of the PDP-11, the most influential and successful minicomputer ever.”

“The PDP-11 was introduced in 1970, a time when most computing was done on expensive GE, CDC, and IBM mainframes that few people had access to. There were no laptops, desktops, or personal computers. Programming was done by only a few companies, mostly in assembly, COBOL, and FORTRAN. Input was done on punched cards, and programs ran in non-interactive batch runs.”

“Although the first PDP-11 was modest, it laid the groundwork for an invasion of minicomputers that would make a new generation of computers more readily available, essentially creating a revolution in computing. The PDP-11 helped birth the UNIX operating system and the C programming language. It would also greatly influence the next generation of computer architectures. During the 22-year lifespan of the PDP-11—a tenure unheard of by today’s standards—more than 600,000 PDP-11s were sold.”

“…the PDP-11 helped popularize the interactive computing paradigm we take for granted today. If you’re looking for a single device that best represents the minicomputer lineage, the PDP-11 is it.”

Click here for the full article

How Is the Use of Virtual Reality in Architecture Becoming Increasingly More Significant?, by Jullia Joson

“The use of advanced technologies such as virtual reality in architecture is becoming increasingly necessary. No matter how beautiful a rendered image may be, it will always lack the capacity to fully convey the scope and feel of a project as a whole, further reinforcing the need to incorporate these technologies at the level of professional practice.”

“Architects who choose not to adopt virtual reality technologies in their design process put themselves at a significant disadvantage, and the problem no longer lies with accessibility, as VR is very much a possibility for architects of all backgrounds in the present age.”

“Head-mounted displays (HMDs) such as the Oculus Rift have the capacity to change how architects and designers create and communicate their ideas long before structures are actually built. Clients can easily be transported into three-dimensional representations of the working design and brought into a state of immersion, almost akin to the emotions evoked when engaging with a virtual built environment in a video game.”

“Virtual worlds aim to temporarily transport consumers to another reality, a well-constructed environment that can transmit subtle things such as emotions, feelings, and sensations. Therefore, if clients are able to experience those emotions prior to physically standing in the building, it opens opportunities for changes to be made before committing to a build.”

“The use of an immersive representation allows an opportunity for greater immediate understanding and comprehension of these design elements, as opposed to just looking at a scale model or visual render.”

Click here for the full article

Walmart Launches AI-Powered Virtual Clothing Try-On Technology for Online Shoppers, by Sarah Perez

“Last May, Walmart announced its acquisition of the virtual clothing try-on startup Zeekit, which leveraged a combination of real-time image processing, computer vision, deep learning and other AI technologies to show shoppers how they would look in an item by way of a simulation that takes into account body dimensions, fit, size and even the fabric of the garment itself. Today, Walmart says it’s bringing that technology to Walmart.com and its Walmart mobile app.”

“The retailer is introducing the computer vision neural network-powered ‘Choose My Model’ try-on feature, now in beta, which will allow Walmart customers to select a model that better matches their own appearance and body type. At launch, online shoppers will be able to choose from among 50 different models to find one who best reflects their own skin tone, height and body shape so they can get a better idea of how clothing would look on them.”

“During tests, Walmart said it received positive customer feedback about the experience, which it hopes will make online clothes shopping feel more like in-person shopping.”

Click here for the full article