
Qualcomm-Powered Razer Handheld Game Console Leaks


The Nintendo Switch has proven that handheld game machines have not been totally subsumed by smartphones. Valve is planning to release the Steam Deck, although the timeline has been pushed back. Not to be outdone, Razer is prepping the release of a Qualcomm-powered gaming handheld known as the Snapdragon G3X Handheld Developer Kit. As the name implies, this device will be aimed at developers, but it could lead to a new generation of Android-powered game machines. 

The device in question was leaked to VideoCardz, so we don’t have an official launch date or info on availability. As a developer device, support will probably be limited, but Qualcomm has a good track record of making its developer hardware generally available for purchase. Presumably, Razer would like the chance to sell units as well. However, it’s also possible this device will only exist in small numbers.

We do know from the leak that the handheld will have the Snapdragon 8 Gen 1, which is a next-gen ARM chip Qualcomm will most likely announce in the coming weeks. That system-on-a-chip will reportedly have 20 percent improved CPU performance and a 30 percent power efficiency boost. The GPU is supposed to be 30 percent faster and 25 percent more efficient. Being inside a large handheld device like the Razer unit also means it can have a more robust cooling solution for better overall performance. 

The leak does not include the display size, but it looks at least as large as the Switch OLED, which has a 7-inch screen. VideoCardz reports that the display is OLED and has a 120Hz refresh rate with HDR support. Powering it all is a 6,000mAh battery, which is substantially larger than most smartphones. Since this device exists to show off the best of Qualcomm’s new chip, it’ll have support for 5G (sub6 and mmWave), Wi-Fi 6E, and Bluetooth 5.2. 

Razer doesn’t have the best track record when it comes to supporting its hardware. Remember the Razer Phone? The Forge Android TV box that never got Netflix support? There are few Android games that warrant a device this powerful, but perhaps Qualcomm could help coax developers to create new content if the Snapdragon G3X Handheld Developer Kit evolves beyond a developer tool. There are also some hints it’ll be pushed as an ideal way to experience cloud gaming platforms like Stadia and Game Pass. 

Qualcomm is currently hosting its annual Snapdragon Summit, so we might get details on the new handheld very soon.


December 1st 2021, 12:51 pm

Apple Reportedly Working on Long Distance Wireless and Reverse Charging Tech


Apple just released the iPhone 13 this month, along with the new iPad Mini and Apple Watch 7. (Photo: Apple)

It’s quite rare for Apple to announce a new technology at one of its hardware launch events, then scuttle it afterwards, but that’s exactly what happened with its much-hyped AirPower charging mat. The tantalizing mat would have been able to charge three Apple devices at once: an iPhone, Apple Watch, and AirPods. Alas, it never saw the light of day. Though it was once listed on the Apple website, reports emerged that not even the wizards at Apple could figure out how to make the darn thing work. But according to Apple beat reporter Mark Gurman (via Ars Technica), Apple might have something even better in the works.

In his latest newsletter, Gurman is speculating that Apple is still working on a three-device charger, which it first announced way back in 2017 with the launch of the iPhone X, but apparently is also aiming even higher than a simple mat you drop your Apple products onto. He reports the company is currently tinkering with “true wireless charging,” meaning charging devices from a distance instead of having to put your device on a very specific area of a mat, as well as reverse-charging, meaning one device could charge another.

Apple’s AirPower never launched, but the company is reportedly still working on similar technology. (Image: Apple)

According to 9to5Mac, Gurman writes, “I do think Apple is still working on some sort of multi-device charger that it intends to eventually release. There’s a reason why it planned to launch the device in the first place in 2017. I also believe Apple is working on short and long distance wireless charging devices and that it imagines a future where all of Apple’s major devices can charge each other. Imagine an iPad charging an iPhone and then that iPhone charging AirPods or an Apple Watch.”

This is not the first time we’ve heard these types of rumors about Apple’s future charging plans, as famed analyst Ming-Chi Kuo thought Apple would be adding bilateral charging to its iPhone lineup all the way back in 2019. This would have allowed a phone to charge a device placed on its back, but obviously it never came to fruition. Interestingly, Samsung does allow this type of charging on many of its smartphones, but Apple never has.

These are all intriguing concepts, and as always it’s interesting to see what Apple comes up with when it’s either entering a product category for the first time or attempting to offer a better solution than what exists currently. However, given the fact that it’s been four years since we first saw the AirPower mat and Apple still can’t make a mat that charges three devices at once, we’re going to guess the wireless charging technology should be available soon, just in time for the iPhone 27 launch.


December 1st 2021, 12:51 pm

Patient Receives First 3D Printed Prosthetic Eye


(Photo: Moorfields Eye Hospital NHS Foundation Trust)
A patient in London has received the first 3D printed prosthetic eye. National Health Service patient Steve Verze visited Moorfields Eye Hospital last week to receive the new left eye after it had been printed by Fraunhofer IGD, an international institute for applied research in visual computing.

To print the eye, Fraunhofer devised a process that would cover every step of the patient experience, from taking a patient’s measurements and matching the design to the healthy eye all the way to printing the actual prosthetic. While a traditionally produced prosthetic eye can take months to reach its recipient, a prosthetic made using Fraunhofer’s technology can be completed in half the time. The 3D printed eye also appears more realistic, which is a contributing factor to many patients’ comfort.

“The closer you get to the real thing makes me feel more and more confident,” said Verze to On Demand News. “If I can’t spot the difference, I know other people can’t spot the difference.”

The method by which the eye is fitted is unconventional, too. Rather than make a wax and alginate mold of the patient’s empty eye socket—an uncomfortable and invasive process that sometimes requires the patient to undergo anesthesia—Fraunhofer elected to work with Occupeye Ltd, a British sensor company that modified an ophthalmic scanner for the project. The scanner enables Fraunhofer to measure a patient’s eye socket within 2.4 seconds.

Patient Steve Verze after receiving the prosthetic eye. (Photo: Moorfields Eye Hospital NHS Foundation Trust)

After obtaining the measurements of the empty eye socket, Fraunhofer captures a color-calibrated image of the patient’s healthy eye and combines the data in a system called “Cuttlefish:Eye,” which produces a 3D print model for the prosthetic. Fraunhofer then prints the model using a multicolor, multimaterial 3D printer.

The technology lends new hope to those who have experienced the loss of an eye, whether from a serious disease like eye cancer or following a traumatic injury. “Having major surgery to remove an eye, for example, is psychologically very challenging. So being able to give a realistic-looking prosthetic eye to the patient as quickly as possible . . . is a big advantage,” said Mandeep Sagoo, ophthalmic surgeon at Moorfields Eye Hospital. In Fraunhofer’s press release, Sagoo added, “[The technology] clearly has the potential to reduce waiting lists.” 

Fraunhofer’s technology represents the first major development in prosthetic eye manufacturing in several decades. Now, an eye can be 3D printed right alongside other healthcare elements, like respirator valves and multi-chamber pills.


December 1st 2021, 12:51 pm

Legislators Reintroduce Bill to Put an End to Online “Grinch” Bots


(Photo: CashOut Kings on Twitter)
Democratic lawmakers have reintroduced a bill in Congress that hopes to prevent the use of so-called “grinch bots” to effectively rob little boys and girls (and PC builders) of their precious Christmas toys, according to PCMag. These dastardly bots are automated programs that can snap up the entire stock of in-demand gadgets. You might have noticed their presence over the past year or so: it has become nearly impossible to simply buy anything related to technology or fun, because bots, software programs that check e-commerce sites every few seconds for in-stock items, grab them and complete the checkout process faster than any human possibly could.

Dubbed the Stopping Grinch Bots Act, this legislation would do the following:

  1. Prohibits manipulative workarounds that allow bad actors to use bots to circumvent control measures designed to protect real consumers.
  2. Makes it illegal to knowingly circumvent a security measure, access control system, or other technological control or measure on an Internet website or online service to maintain the integrity of posted online purchasing order rules for products or services, including toys, and would make it illegal to sell or offer to sell any product or service obtained in this manner.
  3. Allows the Federal Trade Commission to treat these abusive workarounds as prohibited unfair or deceptive acts or practices and take action against the bad actors.

The bill, which is sponsored by US Rep. Paul Tonko (D-NY) and Sens. Richard Blumenthal (D-CT), Chuck Schumer (D-NY), and Ben Ray Luján (D-NM), was actually introduced a year ago, but has seemingly stalled in committee since that time. With the holidays fast approaching, and RTX 3080 GPUs and PlayStation 5s still impossible to purchase, the legislators are stepping up their efforts to get the bill passed for real this time.

You can rent a bot like this one, called Stellar, if you want to be part of the problem. (Image: Stellar)

In a statement seemingly referencing the pain experienced by DIY PC builders, Democratic Majority Leader Chuck Schumer wrote, “After a particularly trying year, no parent or American should have to fork over hundreds—or even thousands—of dollars to buy Christmas and holiday gifts for their children and loved ones.”

The bill would copy many of the attributes of an earlier bill passed in 2016 to prevent bots from purchasing tickets for public events such as concerts and sporting events. The BOTS Act of 2016 made it illegal for bots to attempt to circumvent measures put in place to prevent such activity in the first place, and granted the FTC the authority to enforce the law, with violations treated as unfair or deceptive acts or practices. The new “grinch” bill states that it, “…would apply the structure of the BOTS Act to e-commerce sites, to ban bots bypassing security measures on online retail sites.”

Given the fact that they tried to pass this bill a year ago, and the problem has only gotten worse since then, call us skeptical there will be any movement this time around. Even companies like AMD have attempted to stop bots from buying all its partners’ GPUs, seemingly to no avail. That said, the FTC has gone after people under the previous act, requiring three New York-based ticket brokers to pay a $3.7 million fine earlier this year for buying up to 150,000 tickets for public events. Also, the ongoing supply chain issues, chip shortages, and ever-rising GPU prices might, just might, spur the legislators into action this time around. Still, we won’t be holding our breath.


December 1st 2021, 12:51 pm

Microsoft Under Fire for Adding Short-Term Loan Offers to Edge


Microsoft has had a heck of a time getting people to use its Edge browser, and it’s not for lack of trying. It spammed Windows 10 users, revamped the browser with a Chromium base, and has even made it more annoying to change your default browser in Windows 11. Edge’s unpopularity isn’t entirely Microsoft’s fault, but it’s not doing itself any favors with the latest change. Microsoft now plans to integrate a purchase financing service called Zip into Edge. But do we really need our browsers selling us loans?

Microsoft made the announcement for “Buy now, pay later” on the Edge insider forums earlier this month, and the response was swift and brutal. Most commenters expressed annoyance that Microsoft would integrate this feature with Edge instead of simply making it an optional extension. They threw around terms like “bloatware” and “cash grab” to describe the move. 

Should this feature go live, users will see the Zip option when they click in payment fields (right next to any saved payment methods). You could call this an “ad” if you’re feeling charitable, but it’s a little more troubling than your average ad — this is a browser offering financial services that can come with serious consequences. 

Zip lets you split a purchase between $35 and $1,000 into four monthly payments. Rather than offering an interest rate like most loans, users simply pay a $1 fee with each payment. So, that’s $4, which is a good deal if your purchase is on the large side. However, it’s a very high interest rate if the purchase is smaller — as much as 11 percent. In addition, you can get hit with a few extra dollars in late fees if you miss a payment, and non-payment will get you sent to collections and hit with negative marks on your credit report. It’s just like a regular loan in that respect. 
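That 11 percent figure is just the flat fee expressed as a fraction of the smallest allowed purchase; here is a quick sketch of the arithmetic (the function name and defaults are ours for illustration, not anything from Zip):

```python
# Effective fee rate of a flat-fee "buy now, pay later" plan.
# The $1-per-payment fee, four payments, and the $35-$1,000 purchase
# range come from the article; the function itself is illustrative.

def effective_fee_rate(purchase: float, fee_per_payment: float = 1.0,
                       num_payments: int = 4) -> float:
    """Return total flat fees as a fraction of the purchase price."""
    return (fee_per_payment * num_payments) / purchase

# $4 in fees on the smallest allowed purchase is a steep effective rate...
print(f"on $35: {effective_fee_rate(35):.1%}")
# ...but nearly negligible on the largest.
print(f"on $1,000: {effective_fee_rate(1000):.1%}")
```

On a $35 purchase the $4 in fees works out to about 11.4 percent, which is where the "as much as 11 percent" comes from; on $1,000 it is a trivial 0.4 percent.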

Calling Zip “interest-free” seems a bit disingenuous.

Microsoft says it won’t get a cut of sales from Zip, which actually makes this decision even more confounding. It’s not that I want Microsoft to cash in, but why do this at all if it’s not about money? Is there someone on the Edge team who genuinely thinks users will want this? Zip has existed as a standalone service for years (it used to be called Quadpay). If people want this option, they’ll seek it out. It’s also possible the wording on Microsoft’s support page is misleading while not technically wrong. Perhaps Zip paid Microsoft something up front for access to millions of new customers? I have a hard time believing no money is changing hands here. 

Zip integration is currently live in the Canary and Dev channels of Edge. The plan is to make it available to all users in Edge version 96, which is the current version. So, Zip loans should come to the stable channel soon.


December 1st 2021, 12:51 pm

Analyst: Apple’s AR Glasses Will be Just as Powerful as a Mac Computer


It’s been rumored for some time now that Apple has been working on some type of Augmented Reality (AR) device to pair with its iPhone, and now we have some spicy new rumors to add to the mix. According to famed analyst and Apple product future-seer Ming-Chi Kuo of TFI Asset Management, Apple is about a year away from launching the glasses, and they will be just as powerful as its low-end MacBook computers thanks to the inclusion of Apple’s vaunted silicon, according to CNBC.

As the report notes, it was previously assumed that Apple’s AR glasses would be similar to current AR offerings in that it would need to be tethered to another device, presumably an iPhone, to handle the actual heavy lifting involved in rendering 3D objects in space. But with the arrival of Apple’s power-sipping M1 silicon, it appears the company is now angling to use newly designed chips similar to the M1 as the brains of the operation. If this rumor pans out, it would indeed allow Apple to distance itself from its rivals in the still-nascent product category, as Kuo states the glasses would be able to operate independently of other devices, such as a Mac computer or iPhone. Kuo has stated that Apple’s goal is to replace the iPhone with AR within 10 years.

One of the promises of AR is this Mercedes feature, with the fishbones, or turning indicators, overlaid exactly where you’re supposed to turn.

Kuo goes on to predict the glasses will use two chips, stating, “The higher-end processor will have similar computing power as the M1 for Mac, whereas the lower-end processor will be in charge of sensor-related computing.” He believes the optical portion of the product will employ dual 4K micro OLED panels from Sony, and that it will be powerful enough to operate as its own independent product separate from the iPhone, with its own ecosystem of software and apps, according to 9to5Mac. Kuo has previously stated the Apple glasses will also use Wi-Fi 6, which allows for maximum bandwidth with low latency, a requirement for any enjoyable AR/VR experience.

However, according to reputable Apple reporter Mark Gurman, Apple is actually making a headset first, positioning it as an expensive, niche product that will be followed up in the future with a product that looks more like glasses. Gurman predicts the initial product will be quite premium, with a price tag hovering near $3,000.

Though there’s not much happening in the AR glasses market at the moment, due to the fact that there are still limited use cases for them and they usually need to be tethered to a powerful computer to function, Apple’s arrival in the market could possibly shake things up a bit, as Apple is wont to do. Apple is known for taking existing technologies and just making them easier to use, so it’s possible that’s what it is intending to achieve here. Either way, this is a product category that is seemingly heating up a bit, especially with Facebook changing its name to Meta, and beginning to talk about the hardware it’s working on to facilitate the eventual transformation of the human race into our avatars.



November 30th 2021, 11:21 am

Xbox Series S Supposedly Black Friday’s Most Popular Console


(Photo: Mika Baumeister/Unsplash)
Every year after Thanksgiving a flurry of steals and deals takes America by storm, and discounts involving the newest gadgets—including pricey, hard-to-find video game consoles—enjoy a special level of anticipation. This Black Friday, the new Xbox Series S has beaten out every other console on the market.

According to the Adobe Digital Economy Index as reported by Business Insider, the Xbox Series S proved itself this year’s dark horse, unexpectedly overcoming competitors from Sony, Nintendo, and even from within Microsoft. Based on “over one trillion visits to U.S. retail sites” that occurred during Black Friday 2021, Adobe calculated that the Xbox Series S was the best-seller.

Since its release about a year ago, the Xbox Series S has somewhat lived in the shadow of its fancier, more expensive counterpart, the Series X. While the Series X boasts nearly twice the storage, 4K compatibility, a disc drive, and a more powerful processor, the Series S forces gamers to own all their titles digitally and can’t manage quite the same level of graphical fidelity. The Series S is also smaller and less than half the weight, which can be a pro or a con, depending on one’s priorities. 

Though the Xbox Series X (pictured) is more powerful, the Series S is more affordable—and far easier to find. (Photo: Billy Freeman/Unsplash)

Though the Xbox Series X, PlayStation 5, and Nintendo Switch OLED have hoarded the limelight throughout most of 2021, a lower barrier to entry makes the Xbox Series S a far more accessible option. Hopeful gamers can easily find the Xbox Series S in stores and online, while the other consoles have been rendered tough to find by supply chain issues, like the ongoing global chip shortage. They don’t have to resort to buying from scalpers, who drastically inflate the price of available stock just because they can. There’s also something to be said about the more affordable pricing of the Xbox Series S, which comes in at $300. (The next-cheapest hot console is the $350 Nintendo Switch OLED; prices rise pretty quickly from there.)

The chip shortage impacting console availability—and that of virtually everything else—isn’t expected to end until at least mid-2022, meaning the Xbox Series S may continue to enjoy the spotlight a little longer. The holidays prove a unique challenge-turned-opportunity, too; though many people try to get their holiday shopping done during Black Friday, the season for giving isn’t over yet, and supply chain issues are anticipated to continue as the year comes to a close. 


November 30th 2021, 11:21 am

AT&T and Verizon Agree to Limit 5G Power to Resolve FAA Standoff


A 5G millimeter wave cell site on a light pole in Minneapolis.

The 5G rollout in the US has been slow and uneven. While some countries had a wealth of mid-band spectrum to use for 5G, pickings were slim in the US. That’s why Verizon and AT&T leaned so heavily on millimeter wave (mmWave) 5G early on. Both carriers are anxious to fire up their newly acquired C-band frequencies. Unfortunately, a disagreement with the Federal Aviation Administration (FAA) delayed those plans. Now, they’ve got a plan that will allow C-band to move forward in early 2022. 

According to a letter sent to acting FCC chair Jessica Rosenworcel and seen by Cnet, the parties have arrived at a compromise. AT&T and Verizon have agreed to limit the transmission power of their C-band equipment nationwide. In addition, they will enforce even more stringent limits on power near regional airports. With these restrictions, the carriers will move forward with the January 5th launch date. 

As wireless spectrum becomes ever more crowded, we’re learning that not all 5G is created equal. mmWave 5G operates at very high frequencies in the tens of gigahertz. These signals have high bandwidth — Verizon says its network can run at up to 2 Gbps — but the drawback is extremely poor range. C-band is lower, between three and four gigahertz. These waves used to be reserved for satellite TV, but improved efficiency means the C-band could be auctioned for 5G. 

AT&T and Verizon both snagged licenses for the upper C-band at 3.7-4GHz. One of the Federal Communications Commission’s (FCC) duties is to prevent wireless interference, and it initially gave the carriers permission to start broadcasting on those frequencies on December 5th. However, this part of the C-band is adjacent to frequencies used for radio altimeters. These devices monitor altitude using radio waves between 4.2 and 4.4GHz and are essential to landing systems. The FAA was worried that signals from Verizon and AT&T networks might leak over and interfere with these vital airwaves. 
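The adjacency at the heart of the dispute is easy to put in numbers; here is a minimal sketch using the article’s band edges (the guard-band framing is our illustration, not the FAA’s analysis):

```python
# Spectrum adjacency behind the FAA dispute: the carriers' upper C-band
# licenses top out at 4.0 GHz, while radio altimeters operate at
# 4.2-4.4 GHz. Band edges are from the article.

c_band = (3.7, 4.0)        # AT&T/Verizon C-band licenses, GHz
altimeters = (4.2, 4.4)    # radio altimeter band, GHz

# The gap between the top of the C-band and the bottom of the
# altimeter band is the buffer any out-of-band leakage must cross.
guard_band_mhz = (altimeters[0] - c_band[1]) * 1000
print(f"{guard_band_mhz:.0f} MHz of separation")
```

That 200MHz buffer is what the carriers argue is sufficient, and what the FAA worried out-of-band emissions might still cross, hence the negotiated power limits.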

The proposed limits on 5G aren’t necessarily in place forever. The carriers have pledged to keep things as they are for a period of six months, at which time regulators can evaluate any potential interference with radio altimeters. It’s unclear how much impact, if any, this change will have on coverage for the new mid-band networks. AT&T and Verizon need every bit of performance they can muster to compete with T-Mobile, which has a huge pile of mid-band spectrum from its Sprint acquisition.


November 30th 2021, 9:51 am

Rumor Mill: Nvidia Prepping Flagship RTX 3090 Ti GPU


If there’s one thing the gaming world needs right now it’s another outrageously expensive and unobtainable graphics card, and Nvidia is heeding the call with reports of an alleged top-tier GPU waiting in the wings. Dubbed the RTX 3090 Ti, this full-blown Ampere card will offer the entirety of the GA102 die’s performance envelope, along with higher clocked memory from Micron to help distance the card from its lowly RTX 3090 predecessor.

The rumor springs forth from Twitter user Uniko’s hardware, who tweets that the new GPU will feature new 21Gb/s memory from Micron, a bit of a boost over the roughly 19.5Gb/s memory in the current RTX 3090. The memory chips also feature twice the capacity of the previous modules, so only half as many will be required to reach the allotted 24GB of GDDR6X in the new GPU. This reduction in memory chips should allow the card to run a bit cooler, despite those chips requiring more power than the previous model due to the higher clock speeds. The GPU will also keep the same 384-bit bus as the current card, theoretically allowing it to offer up to 1TB/s of memory bandwidth.
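The 1TB/s figure and the halved chip count both fall out of the leaked numbers; a back-of-the-envelope sketch (the 32-bit-per-chip channel width is standard for GDDR6X, not something from the leak):

```python
# Back-of-the-envelope GDDR6X math for the rumored RTX 3090 Ti.
# The 384-bit bus, 21Gb/s per-pin rate, and 24GB capacity come from
# the leak; the 32-bit-per-chip interface is standard GDDR6X.

bus_width_bits = 384
pin_speed_gbps = 21                                 # per-pin data rate

bandwidth_gbps = bus_width_bits * pin_speed_gbps    # gigabits per second
bandwidth_GBps = bandwidth_gbps / 8                 # gigabytes per second

chips = bus_width_bits // 32                        # one chip per 32-bit channel
capacity_per_chip_GB = 24 / chips                   # 2GB (16Gb) chips, vs 1GB on the 3090

print(f"{bandwidth_GBps:.0f} GB/s across {chips} chips of {capacity_per_chip_GB:.0f}GB each")
```

That works out to 1,008GB/s, hence "up to 1TB/s," delivered by 12 two-gigabyte chips instead of the RTX 3090's 24 one-gigabyte chips.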

Other salient specs include the full allotment of GA102’s horsepower, including 10,752 CUDA cores (up from 10,496), the aforementioned 24GB of super-fast memory, and a total board power of around 400W or so, which is 50W more than the current RTX 3090. That’s an increase in core count of roughly 2.5 percent, and when you throw in the faster memory, it seems reasonable to assume the new GPU will be about five percent faster overall than its predecessor. There is no information at this time about clock speeds, however. WCCFtech reports that despite being more powerful than the card it replaces, the MSRP should remain the same at $1,499.
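The quoted uplifts are easy to check against the leaked figures; a quick sketch (the numbers are from the rumor, the arithmetic is ours, and the five-percent overall estimate remains a guess rather than a benchmark):

```python
# Sanity-checking the rumored RTX 3090 Ti uplifts over the RTX 3090.
# Core counts and memory speeds are the leaked figures from the article.

cores_3090, cores_3090ti = 10_496, 10_752
core_uplift = cores_3090ti / cores_3090 - 1    # fractional core-count gain

mem_3090, mem_3090ti = 19.5, 21.0              # Gb/s per pin
mem_uplift = mem_3090ti / mem_3090 - 1         # fractional memory-speed gain

print(f"cores: +{core_uplift:.1%}, memory speed: +{mem_uplift:.1%}")
```

The core count rises about 2.4 percent (256 extra cores) and memory speed about 7.7 percent, which is why a mid-single-digit overall gain is a plausible, if rough, expectation.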

If this card does indeed exist, its arrival follows a pattern established by Nvidia in the past, where it released a cut-down version of its biggest die first, then followed up with an unfettered “big” version of the chip at the end of the product’s lifecycle, though previously it was branded Titan. Nvidia seems to have abandoned the Titan naming altogether for some reason, and is now swapping back-and-forth between Super and Ti for upgraded versions of its current GPUs. While it favored Super in the previous Turing era, it has switched to Ti for its Ampere upgrades.

Still, the news of this card’s imminent release begs the question, “Why?” We already have an RTX 3080 Ti, which is very close in specs to the RTX 3090, aside from having half the memory. Not to mention that neither of these GPUs can be purchased for anywhere near its MSRP, leaving them costing over $2,000 on third-party sites, assuming you can even find one for sale.

Regardless, this rumored GPU is supposed to break cover in January 2022, perhaps as the company’s big announcement for CES. Go talk to your loan officer now, and watch this space.


November 29th 2021, 3:58 pm

NASA Launches DART Asteroid Deflection Mission


Earth has been pelted by space rocks on a regular basis for the entirety of its existence, and there’s nothing stopping it from happening again. The next big Earth impactor is already out there, and eventually, it’ll make itself known. For the first time, there’s a chance we could stop such an object from clobbering the planet. NASA just launched the DART spacecraft early Wednesday morning (Nov 24) on a mission to test asteroid redirection technology. Next year, it will collide with a space rock called Dimorphos with the aim of changing its orbit.

NASA partnered with SpaceX to launch DART (Double Asteroid Redirection Test) aboard a Falcon 9 rocket. Its departure from Earth went off without a hitch, too. The spacecraft’s target is a binary asteroid system consisting of Dimorphos (the target) and 65803 Didymos, the larger asteroid around which Dimorphos orbits. These objects cross Earth’s orbit, making them potentially hazardous in the future. 65803 Didymos is almost a kilometer in diameter, so it could cause major devastation if it were to hit Earth. Dimorphos (formerly known informally as Didymoon) has a diameter measured in the tens of meters, small enough that DART might be able to knock it off course. 

Dimorphos and Didymos make an ideal system to study the effectiveness of asteroid redirect technology. NASA will analyze the orbit of Dimorphos before and after the impact to see how it was affected. Dimorphos will become the smallest celestial body ever visited by a human spacecraft, making the mission that much more challenging. The 1,100-pound (500 kilograms) DART needs to line up its attack run and hit Dimorphos at 6.6 kilometers per second, all while the NASA team is 6.8 million miles away—it’s all automated. 

In addition to smashing itself to bits, DART will deploy a small secondary satellite from the Italian Space Agency called LICIACube. This spacecraft will witness DART’s demise from 34 miles (54 kilometers) away and measure the amount of debris kicked up from the impact. 

To judge the success of the mission, astronomers will look to see how fast Dimorphos completes an orbit around Didymos. Currently, the smaller rock orbits the larger one every 11.9 hours. If the orbit speeds up by just 73 seconds, the mission will be deemed a success, though some members of the team are expecting a much larger change on the order of ten minutes. If we can manage that with the relatively small DART impactor, it should be possible to nudge a killer asteroid into a safer orbit with similar technology. The real challenge might be spotting the target before it’s too close to redirect. We’ll know if humanity has a new tool to preserve life on Earth sometime next fall.
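Both thresholds are tiny fractions of the current orbit, which shows how delicate the measurement will be; a quick check of the arithmetic (all figures from the article):

```python
# How big a change DART needs to make to Dimorphos's orbit.
# The 11.9-hour period, 73-second success threshold, and ~10-minute
# hoped-for change are the figures quoted in the article.

period_s = 11.9 * 3600      # current orbital period, in seconds
threshold_s = 73            # minimum period change for mission success
hoped_s = 10 * 60           # the larger change some team members expect

print(f"minimum success: {threshold_s / period_s:.2%} of one orbit")
print(f"hoped-for change: {hoped_s / period_s:.2%} of one orbit")
```

A 73-second shift is less than 0.2 percent of the 11.9-hour period, and even the optimistic ten-minute change is only about 1.4 percent, small enough that careful before-and-after observations are essential.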


November 29th 2021, 10:28 am

Apple Files Lawsuit Against NSO Group for its Pegasus Spyware Attacks


Apple talks up iPhone security, but Zerodium says it's falling behind.

(Image: Getty Images)
If you’ve been reading any news related to cybersecurity in the past few years, you’ve certainly heard the name NSO Group before. The Israeli company has gained notoriety recently for its Pegasus software, which it licenses to governments and other nation-state clients to theoretically monitor terrorists, criminals, and the like. However, recent investigations discovered Pegasus installed on the smartphones of journalists, activists, and business leaders all over the globe. Due to this shocking discovery, Apple has announced a lawsuit against NSO Group and its parent company, seeking to bar the group from using any of Apple’s services and hardware in the future, and thereby protect its users from malicious attacks on their personal devices.

For a brief primer, Pegasus is essentially spyware that can be silently deployed against a target and used to monitor everything on a person’s mobile device. According to the filing, the Pegasus software was first identified by researchers at Citizen Lab at the University of Toronto, who discovered Pegasus could initiate what is known as a “zero-click exploit,” meaning it could deploy without any input from the user.

The attack, which Citizen Lab named FORCEDENTRY, worked in several stages. First, the company allegedly contacted Apple’s servers in the US to identify other Apple users, then worked to confirm the target was using an iPhone. Next it sent “abusive data” to the target via iMessage, which disabled logging and allowed it to upload a bigger file: the payload. That bigger file was stored on iCloud servers, then delivered to the targets’ phones. Once the Pegasus payload was in place, it began communicating with a command-and-control server, through which a person could send commands to the phone. This allowed third parties to control the phones remotely, vacuuming up call logs, web browser history, and contacts, and even letting them turn on the phone’s microphone and camera and send what was captured back to the nefarious server.

A consortium of global journalists launched an investigation into this situation in July, dubbed the Pegasus Project, and found that “Military-grade spyware licensed by an Israeli firm to governments for tracking terrorists and criminals was used in attempted and successful hacks of 37 smartphones belonging to journalists, human rights activists, business executives and two women close to murdered Saudi journalist Jamal Khashoggi.”

Image from an NSO Group brochure posted on SIBAT (The International Defense Cooperation Directorate of the Israel Ministry of Defense). (Image: Citizen Lab)

This seems like pretty standard spyware stuff, but what’s remarkable is the zero-click aspect: typically a user has to initiate the deployment of malware or spyware by clicking a link sent to them or taking some other action. Not this time. This type of attack is only possible because NSO Group and companies like it employ researchers who work to discover unknown vulnerabilities in popular software such as iOS and Microsoft Windows, then use those gaps in security to build software that can penetrate target devices before the developer realizes there’s a flaw. Such security holes are known as zero-days, because the developer has had zero days to fix the flaw. Companies like Apple, Microsoft, and Google have massive security teams of their own who work to find these flaws before rogue actors do, but given the complexity of the software involved, it’s a never-ending battle against companies like NSO Group. Notably, Apple patched the vulnerabilities that allowed Pegasus to run with its iOS 14.8 update in September, and in its press release the company notes, “Apple has not observed any evidence of successful remote attacks against devices running iOS 15 and later versions.”

This is not the first time NSO Group has been in the headlines. The US government blacklisted the company earlier this month, “after determining that its phone-hacking tools had been used by foreign governments to ‘maliciously target’ government officials, activists, journalists, academics and embassy workers around the world,” according to The Post. The company is also embroiled in a lawsuit with WhatsApp over claims its spyware was used to hack 1,400 users of its app. Earlier this month, the Ninth Circuit Court of Appeals rejected NSO Group’s claim that it should have “sovereign immunity” in the case.

If you’re interested in a deep dive on NSO Group, the podcast Darknet Diaries recently posted an episode about it, including an interview with the Citizen Lab researchers who discovered Pegasus. You can also read Apple’s full complaint right here.


November 29th 2021, 10:28 am

El Salvador Wants to Build a Volcano-Powered Crypto City


(Photo: Harrison Kugler/Unsplash)
The world’s first “crypto city” is coming to El Salvador, and its cryptocurrency focus isn’t even the only thing that makes it notable. With a good dose of fanfare, El Salvador President Nayib Bukele announced this month that beginning in 2022, the country would build a “bitcoin city” at the base of Conchagua, an oceanside volcano. 

The city will be funded by bitcoin-backed bonds, with construction beginning 60 days after funding. During his announcement at the end of a weeklong bitcoin promotional event, Bukele pitched the city as an ideal place for both residents and tourists, with restaurants, shops, homes, public transit, an airport, and other standard resources. There will also be a central plaza in the shape of the bitcoin logo. 

But crypto mining is so energy-hungry that its consumption rivals that of entire countries, thanks to the round-the-clock complex mathematical calculations that unlock new tokens. How will El Salvador’s new city supply the vast amounts of power its reliance on bitcoin demands? That’s where the volcano comes in. As cool and alluring as a city at the base of a volcano may be, Conchagua serves a practical purpose: powering the processing and verification of events on the blockchain with geothermal energy. This involves a system that extracts hot groundwater and converts it to steam, which in turn drives turbines and generates electricity.
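For a rough sense of scale, a steam plant’s electrical output is just mass flow times enthalpy drop times turbine efficiency. The numbers below are illustrative assumptions, not figures from El Salvador’s actual facilities:

```python
def geothermal_power_mw(steam_flow_kg_s, enthalpy_drop_kj_kg, efficiency):
    """Rough electrical output of a geothermal steam plant, in megawatts."""
    # kg/s * kJ/kg = kW; divide by 1,000 for MW.
    return steam_flow_kg_s * enthalpy_drop_kj_kg * efficiency / 1000.0

# Assumed numbers: 100 kg/s of steam, 500 kJ/kg usable enthalpy drop,
# 80 percent turbine/generator efficiency.
print(geothermal_power_mw(100, 500, 0.80))  # 40.0 MW
```

Even under these generous assumptions, a single plant in the tens of megawatts shows why a city built around mining would need serious geothermal capacity.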

Volcano San Vicente in El Salvador. (Photo: Oswaldo Martinez/Unsplash)

El Salvador already supports a quarter of its electrical grid with geothermal energy, which it harvests from a handful of the country’s 20 active volcanoes. In October its government set up a prototype for the crypto city’s system at the base of Tecapa volcano, though the results haven’t been particularly impressive so far. Geothermal energy production, while far “cleaner” than production involving fossil fuels, also impacts natural habitats surrounding the sites of groundwater extraction.

This isn’t the first time El Salvador’s government has aggressively pursued a bitcoin “first,” having made bitcoin legal tender alongside the US dollar and required that all businesses accept the cryptocurrency earlier this fall. (Whether Salvadoran citizens like bitcoin’s sudden prevalence depends on who you ask.) These choices have prompted the rest of the world to question whether popular cryptocurrencies have a place in everyday business transactions. Alex Hoeptner, CEO of cryptocurrency exchange BitMEX, predicts at least five countries will accept bitcoin as legal tender by the end of 2022—but it’s unlikely those countries will have crypto cities powered by volcanoes. 


November 29th 2021, 10:28 am

AMD Allegedly Jacking Up RX 6000 GPU Prices by 10 Percent


Here’s some more bad news for gamers: that outrageously priced GPU you can’t find in stock is about to get a little more expensive. Back in August, TSMC began warning its partners that wafer costs would be going up soon, and we now have the first semi-official confirmation that is indeed happening. A post on the Board Forums alleges AMD has sent a notice to all of its add-in board (AIB) partners that it’s increasing the price of its RX 6000 series GPUs by 10 percent across the board, according to VideoCardz. The change will take effect with the next shipment of GPUs to its partners and will reportedly drive up the price of these GPUs by $20 to $40. This news arrives just in time for the holiday shopping season, when demand for GPUs is expected to increase even more, as if that were even possible.

According to a translation of the forum posting, AMD is citing TSMC wafer costs as the reason for the change; as we reported earlier, sub-16nm prices, including 12nm, 7nm, and 5nm, are said to have increased roughly 10 percent, while TSMC’s older nodes have gone up by as much as 20 percent. AMD appears to be passing this increase along to its partners, who in turn pass it along to us, the customers, or to the scalpers, as it were, who then pass it along to us, the gamers. Although, as VideoCardz points out, AMD also produces its CPUs at TSMC, and there hasn’t been a similar across-the-board increase there, which is curious.
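The math is straightforward: a flat 10 percent bump producing a $20 to $40 increase implies partner board costs of roughly $200 to $400 before the hike. A quick sanity check (the base prices are inferred from the reported figures, not from AMD):

```python
def price_after_increase(base_usd, pct=10.0):
    """Price after a flat percentage increase."""
    return base_usd * (1 + pct / 100.0)

# Inferred base costs consistent with the reported $20-$40 bump.
for base in (200, 400):
    bump = price_after_increase(base) - base
    print(f"${base} board -> +${bump:.0f}")
```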

Overall, this is just another blow for gamers hoping to find a GPU at a reasonable price any time soon, and it’s not surprising either. Not only did TSMC announce its price hikes a few months ago, but every bit of news lately regarding the global supply chain, GPU pricing, and the cost of anything that allows us to have any fun has been along the lines of, “the problem is not going to get better, it’s only getting worse.” We reported in September that GPU prices were rising again after a brief lull in August, adding that companies like Nvidia were indicating the current shortage (and associated price increases) might last at least until this time next year.

Trying to find a GPU for sale that is anywhere near MSRP is a fool’s errand these days.

To its credit, AMD does theoretically sell cards at MSRP directly from its website, but we’ve never actually seen one in stock there, although it’s not like we check daily. Nvidia used to sell its Founders Edition cards directly from its website as well, but that situation was such a debacle that it now just directs customers to channel partners like Newegg and Best Buy so they can see for themselves whether the cards are in stock (they’re not).


November 29th 2021, 10:28 am

Rolls-Royce ‘Spirit of Innovation’ All-Electric Aircraft Smashes World Records in Latest Flight


Rolls-Royce has been working on an all-electric plane, called “Spirit of Innovation,” and the company reports that the aircraft absolutely clobbered at least three world records in its latest test flights. For an aircraft that has only been in the air for a few hours total, that’s pretty impressive — this report comes not quite two months after the aircraft took off for its maiden flight.

After the flights, the company announced: “We have submitted data to the Fédération Aéronautique Internationale (FAI) – the World Air Sports Federation who control and certify world aeronautical and astronautical records – that at 15:45 (GMT) on 16 November 2021, the aircraft reached a top speed of 555.9 km/h (345.4 mph) over 3 kilometres, smashing the existing record by 213.04 km/h (132mph).”

They go on to say that the day’s later test flights also belong in the record books. While they didn’t push the later flights quite as fast, at least one still hit 330 mph. They also report breaking the record for fastest climb to 3 km altitude by an entire minute, clocking in at 202 seconds, as well as breaking two other speed records over distances of three and fifteen kilometers, respectively. The flights took place at the UK Ministry of Defence’s Boscombe Down aerodrome, an airfield not unlike Edwards AFB, used for testing new and experimental aircraft.
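The quoted figures hold up under standard unit conversions; a quick check of the numbers above (only the standard km-to-mile factor is assumed):

```python
KM_PER_MILE = 1.609344  # exact definition of the international mile

def kmh_to_mph(kmh):
    """Convert kilometers per hour to miles per hour."""
    return kmh / KM_PER_MILE

print(round(kmh_to_mph(555.9), 1))   # top speed: ~345.4 mph
print(round(kmh_to_mph(213.04), 1))  # record margin: ~132.4 mph
print(round(3000 / 202, 1))          # average climb rate to 3 km: ~14.9 m/s
```

That climb works out to nearly 3,000 feet per minute, sustained, on battery power alone.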

Spirit of Innovation uses liquid-cooled Li-ion batteries and a 400 kW powertrain developed with partners Electroflight and YASA, also of the UK. The single-seat aircraft has an ultralight carbon-fiber hull, and while it boasts the ability to put out 500+ hp, it can land with two of its three batteries disabled.

While this one plane won’t revolutionize the industry, it does provide much-needed data for the idea of urban and commuter aircraft. “The characteristics that ‘air-taxis’ require from batteries,” said Rolls-Royce, “are very similar to what was developed for the Spirit of Innovation.”

The company added, “The advanced battery and propulsion technology developed for this programme has exciting applications for the Advanced Air Mobility market.”

There’s even a practical angle to the development of all-electric aircraft. The UK’s Business Secretary, Kwasi Kwarteng, said: “The government is proud to back projects like this to leverage the private investment necessary to unlock cleaner, greener aircraft which will allow people to fly as they do now, but in a way that cuts emissions.” R-R CEO Warren East added, “Following the world’s focus on the need for action at COP26, this is another milestone that will help make ‘jet zero’ a reality and supports our ambitions to deliver the technology breakthroughs society needs to decarbonise transport across air, land and sea.”


November 24th 2021, 1:31 pm

Report: Windows on ARM Is Exclusive to Qualcomm, But Not For Much Longer


Qualcomm support could give Windows a boost going forward.

For years, Windows PCs have flirted with ARM processors, but none of the hardware was good enough to compete with x86 chips from Intel and AMD. Now, we might know why. According to a new report from XDA Developers, Microsoft has an exclusivity agreement with Qualcomm that prevents other vendors’ ARM designs from integrating with Windows. That’s the bad news. The good news is it will expire soon. 

Microsoft and Qualcomm teamed up in 2016 to launch Windows 10 with support for ARM. Unfortunately, the difficulty of making software run well on Windows for ARM has impeded growth. This wasn’t even Microsoft’s first attempt at supporting ARM: in the Windows 8 era, it launched Windows RT for 32-bit ARM chips, a spectacular disaster owing to the lack of app support. Newer chips from Qualcomm didn’t really help, either. The available hardware, like the Snapdragon 7c and the Microsoft-exclusive SQ1, leaned on Qualcomm’s mobile chip designs — they just didn’t have enough power for Windows. 

According to XDA’s source, the Microsoft x Qualcomm collab guaranteed the latter a period of exclusivity for Windows on ARM. That explains why we’ve only seen a handful of Qualcomm-powered Windows machines despite years of effort. It could also shed light on why there is still no way to run Windows 11 on the new M1-based Apple machines. 

Apple’s M1 chip proves ARM is ready for real computers.

We don’t know how long that deal was supposed to last or when exactly it will run its course, but we are assured it’s not too far off. That matches what we’re seeing in the market. MediaTek is gearing up to launch its own high-end ARM chips for Windows. Meanwhile, Qualcomm recently talked about its new generation of CPU designs from the newly purchased Nuvia team. These cores are designed for PCs, so they should be much more powerful when used in notebooks. 

These companies have made it clear they feel the transition from Intel to ARM is inevitable. Apple has already shown that ARM is a viable architecture for serious desktop computing, and perhaps Qualcomm’s exclusivity has only served to slow down the transition on Microsoft’s side. It might be a good time to nurse along your ailing Windows laptop. There could be big changes afoot in the coming year, provided that mysterious exclusivity times out. Five years seems like plenty of time for Qualcomm to have made a go of things.


November 24th 2021, 1:31 pm

Super-Powered Gameboy Advance Runs PS1 Games


If finding a way to play PlayStation 1 games on a Gameboy Advance was one of your 2021 resolutions, you’re in luck. A modder by the name of Rodrigo Alfonso has hacked a cartridge for the 20-year-old handheld, allowing them to play 3D games like Crash Bandicoot and Spyro: Year of the Dragon on a console that was never made to do so.

Alfonso hacked the Gameboy Advance by moving its game processing and rendering to a custom cartridge, which contained a Raspberry Pi 3 with a PlayStation emulator. The Raspberry Pi allows Alfonso to stream any RetroPie-compatible game to the Gameboy Advance through the device’s link port. 

Alfonso is limited to streaming very low-res graphics—240×160—but that’s kind of the point. Depending on how Alfonso feels about framerate when they run a game, they can elect to stream at 240×160 or 240×80 and enjoy the retro vibe intense scanlines offer, or they can go for 120×80 and utilize a slightly brighter mosaic effect. The display itself is illuminated using a backlight mod with an AGS101 display. Alfonso shared videos of their success with the mod on their YouTube channel and provided replication instructions on GitHub. 
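Those resolution options trade pixels for framerate because every frame has to squeeze through the link cable. A back-of-the-envelope look at raw frame sizes, assuming uncompressed frames at the GBA’s native 15-bit color (2 bytes per pixel; the actual mod may pack or compress data differently):

```python
def frame_bytes(width, height, bytes_per_pixel=2):
    """Raw size of one uncompressed frame."""
    return width * height * bytes_per_pixel

# The three modes the mod offers:
for w, h in [(240, 160), (240, 80), (120, 80)]:
    print(f"{w}x{h}: {frame_bytes(w, h):,} bytes per frame")
```

Halving the vertical resolution halves the data per frame, which is why the lower modes can sustain a higher framerate over the same link.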

Because RetroPie is capable of emulating a wide range of retro consoles, from the Nintendo Super NES to the Atari 7800, Alfonso’s hacked Gameboy Advance can play far more than just PS1 games. (One of Alfonso’s videos shows Battletoads and Super Mario RPG: Legend of the Seven Stars running on the device, games originally developed for the NES and SNES, respectively.) But given that the PS1 wasn’t a handheld and was made by an entirely different company—unlike the Nintendo consoles the modded Gameboy Advance can now emulate—Alfonso’s ability to play Metal Gear Solid on the 2.9-inch screen is extra impressive.

The Gameboy Advance is no stranger to modification. Tinkerers have long sold custom cartridges for the old-school device, allowing users to basically make their own game cartridges, even with multiple game files on one “flash cart.” Unfortunately, no matter how good their console hacking skills or comprehensive their emulators, fans of the original Gameboy Advance are unlikely to ever experience PlayStation 2+ games on the tiny screen, given the later models’ incorporation of an analog stick into their controls. 


November 24th 2021, 1:31 pm

Wear OS Is Growing Again, and Google Has Samsung to Thank


Google was the first big tech firm to invest in the modern smartwatch when it worked with LG, Motorola, and Samsung to release Android Wear devices in 2014. It would be another year before the Apple Watch debuted, and it was all downhill for Android Wear after that. The branding change to Wear OS didn’t help, but partnering with Samsung did. This fall, Samsung launched the Galaxy Watch4 line, its first smartwatches running Android since 2014. Last summer, Wear OS was languishing at four percent of smartwatch shipments; now it’s at 17 percent, according to Counterpoint Research.

Samsung’s history with Android-powered watches actually extends further back than Wear OS. Before Google decided how an Android-powered watch should work, Samsung made its own version of Android to run on the Galaxy Gear. It later updated that watch to Tizen, which almost all of Samsung’s smartwatches have used since, instead of Android. There was, of course, the Gear Live, one of Google’s Android Wear launch devices, but Samsung was all-in on Tizen after that. 

By coming back to Wear OS, Samsung gained access to a much larger collection of software. Even with years of mismanagement from Google, the Play Store has much more software for smartwatches than Samsung could ever collect in the Tizen store. Google, meanwhile, gets Wear OS on hardware that Samsung will market harder than any other OEM. And it’s working. 

In the third quarter of 2021, Wear OS shipments reached 17 percent of all smartwatches. That’s a strong number two finish behind Apple Watch OS at 22 percent. It’s also a huge boost over the four percent number of the previous two quarters. Samsung has reason to celebrate, too. The move to Wear OS has helped to increase Samsung’s share of the market. It has overtaken Huawei this year, rising to the number two brand after Apple. 

Apple’s share of the market has dropped 10 percent over the past year, but the reason for that is clear. Apple was unable to launch its Series 7 watch alongside the new iPhones. The delayed device launched in the fourth quarter, so those numbers are not included in the report. It’s likely Apple’s new watch will give it a boost in the fourth quarter, and that could make Samsung’s year-end totals less impressive. 

Still, this is a turnaround for Wear OS, which is something Google desperately needed. Now it has to capitalize on it. Hopefully, Google doesn’t squander this second chance at prominence in wearables the way it did its first.


November 23rd 2021, 3:18 pm

WSJ: Samsung’s New $17B Chip Plant Will be in Taylor, TX


(Photo: Alerkiv/Unsplash)
Samsung has announced the location for its upcoming chip plant as it seeks to “chip” in to the widespread semiconductor shortage. The company has chosen Taylor, Texas as the site for the $17 billion manufacturing hub, according to sources for the Wall Street Journal. Texas governor Greg Abbott is expected to make an announcement regarding the plant later today. 

Samsung’s new plant will use up about 1,200 acres of land and will bring 1,800 jobs to Taylor once production kicks off in 2024. The facility is part of Samsung’s $205 billion investment in chip manufacturing and biotech, which will be the company’s focus over the next three years.

Samsung has been preparing to build a new US-based chip plant for a while now, but details about its plans have been few and far between. There was even a period in which Samsung (alongside TSMC and Intel) threatened to pull the plug on its plans should the locations under consideration not offer generous incentives to build (i.e., tax breaks). Clearly, Samsung’s concerns were addressed. Though the company is said to have also courted Austin, Texas, Taylor reportedly offered better tax incentives—though Samsung’s original US-based plant in Austin is expected to remain in operation following the new plant’s opening.  

Samsung’s Austin manufacturing facility. (Photo: Samsung)

The conglomerate’s plan to open a new chip plant dovetails with a global chip shortage ExtremeTech readers are by now intimately familiar with. Samsung Vice Chairman Lee Jae-yong reportedly visited the United States to speak with White House officials about the shortage and discuss federal incentives for chipmakers. According to sources for The Korea Times, Samsung planned to reveal the plant’s location once Lee returned home to South Korea this week, but now the cat is out of the bag.   

The Biden administration has been pushing for increased US chip manufacturing, recommending earlier this year that a cool $50 billion be put into research, development, and actual production of the highly sought-after resource. Semiconductors are in just about everything these days, and production delays related to Covid-19 have drastically impacted output, causing the tiny chips to be in short supply. Given that the shortage isn’t expected to end anytime soon, manufacturers and government officials alike see semiconductor fabrication as an opportunity to bolster the US economy and compete with countries such as China and Taiwan, which currently lead in production.


November 23rd 2021, 3:18 pm

Newly Announced Exoplanet-Hunting Space Telescope Funded by Breakthrough Initiative


Image: Wikipedia. Alpha Centauri AB is on the left, Beta Centauri on the right, and Proxima Centauri is at the center of the red circle.

Move over, James Webb: humanity is about to get another eye in the sky. A new space telescope named TOLIMAN has just been announced, and it already has funding from the Breakthrough Initiatives.

The telescope is designed around two things: its target, and the exotic optics it will use. TOLIMAN’s mission is to point directly at the Alpha Centauri system in order to search for potentially habitable exoplanets there. The system actually contains three stars; Proxima Centauri (inside the red circle above) is confirmed to host a rocky planet in its Goldilocks zone, and there are likely several other planets elsewhere in the system.

Proposed design of the TOLIMAN space telescope. Credit: University of Sydney.

The name TOLIMAN stands for “Telescope for Orbit Locus Interferometric Monitoring of our Astronomical Neighbourhood.” Clunky, we know. But the acronym was chosen in homage to the space telescope’s target star system. Toliman is the official name of a star within the Alpha Centauri system: α Centauri B, the smaller and cooler of the binary pair around which Proxima Centauri orbits.

At 30 cm, the telescope is surprisingly small for a space telescope, but then its target is Earth’s nearest neighboring star system. Alpha Cen is the brightest star in the constellation Centaurus, visible in the southern sky, and it’s about 4.3 light-years from Earth.

Image: Stellarium, via NASA

The system was first documented by Arabic astronomers during the Golden Age of Islam. The word “Toliman” itself is the Latinized version of an ancient Arabic name for Alpha Centauri meaning “the Ostriches.” But two other stars in the southern sky already bore that name, so to bring the star’s name into accord with the constellation in which it was found, it was later renamed Rijl al-Qinṭūrus. This in turn was Latinized to Rigil Kentaurus, “the Centaur’s foot,” which is where TOLIMAN is going to point.

TOLIMAN’s exotic optics are its other keystone feature. The telescope will use a “diffraction pupil lens” for its observations. Multiple overlaid structural patterns are arranged on the surface of the lens, so that the different areas separate incident light by its phase.

Top: Black and white regions show three overlaid patterns on the diffraction pupil lens; black and white regions in each are in antiphase. They perform a kind of binning by phase. Bottom: The point-spread field associated with each. Right: “The patterns to the left illustrate 3 separate log-harmonic spirals, while to the right is the combined effect of the sum of all 3 log-harmonic spirals (upper) together with the corresponding PSF (lower).” Credit: study authors.

Because of this ability to distinguish separate sources, this design lends itself very well to studying Alpha Cen in particular. The system’s binary pair are only about 23 AU apart — about the distance from the Sun to Uranus. That means they have very little angular separation between them. Even so, the two stars can be clearly distinguished by the diffraction pupil design, because where they might overlap visually, using this lens means that different light sources stand out from one another in an obvious, kaleidoscopic way:

Different light sources really do stand out quite sharply from one another. Image: Fig. 3, TOLIMAN project abstract; Tuthill et al., 2018.
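A rough check of just how tight that separation is: by the small-angle rule, 1 AU of separation at a distance of 1 parsec subtends 1 arcsecond. Applying that to the figures above (a simplification that ignores the orbit’s orientation on the sky):

```python
LY_PER_PARSEC = 3.26156  # light-years in one parsec

def angular_sep_arcsec(separation_au, distance_ly):
    """Small-angle approximation: 1 AU at 1 parsec subtends 1 arcsecond."""
    distance_pc = distance_ly / LY_PER_PARSEC
    return separation_au / distance_pc

# 23 AU between the binary pair, seen from 4.3 light-years away:
print(round(angular_sep_arcsec(23, 4.3), 1))  # ~17.4 arcseconds
```

Seventeen arcseconds is a tiny slice of sky, which is exactly the regime where the diffraction pupil’s source-separating trick pays off.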

The international collaboration is led by Peter Tuthill of the Sydney Institute for Astronomy, and it includes teams from the University of Sydney, Breakthrough Initiatives, Saber Astronautics, and NASA’s JPL. Jason Held, CEO of Saber Astronautics, described TOLIMAN in a press release as “an exciting, bleeding-edge space telescope,” one that will be “supplied by an exceptional international collaboration. It will be a joy to fly this bird.”


November 23rd 2021, 3:18 pm

Owners Resort to Hacking Smart Treadmills After NordicTrack Locks Them Out


It’s natural to expect that if you buy something, you can do whatever you want with it. However, the complexity of intellectual property law has made that difficult. The right-to-repair movement is gaining steam, with even Apple loosening restrictions on tinkering with your own hardware. NordicTrack is not so enlightened, though. After customers started installing their own apps on the company’s $4,000 X32i smart treadmill, it released a software update that locked them out. Owners aren’t happy. 

Exercise equipment is smarter than ever before. Companies like Peloton have made boatloads of cash by integrating subscription training services with their hardware, and that’s what NordicTrack does too. The X32i is a spendy treadmill with a huge 32-inch touchscreen display, which delivers fitness content from NordicTrack’s iFit, a service that costs $39 per month. You’re probably familiar with these services, if only by reputation: suspiciously perky trainers urging you on, online leaderboards, one-on-one help, and more. 

Until now, anyone who wanted more from their $4,000 treadmill could simply unlock the device’s underlying Android OS. According to owners, the process was simple and documented in NordicTrack’s help documents. Just tap the screen ten times, wait seven seconds, and tap ten more times. With access to the Android UI, you can sideload the apps of your choice and even use the browser to access a world of content online. 

NordicTrack uses the screens on its smart exercise equipment to deliver a variety of workouts for $39 per month.

In October, NordicTrack started rolling out an update that removed the so-called “privilege mode” from all its connected workout machines. According to NordicTrack, this is just about safety. Since the software can control the mechanical components of the treadmill, it doesn’t want people to install third-party apps in a public setting (the X32i is available to both consumers and commercial buyers). Owners who relied on sideloading have suddenly found their expensive treadmills are much less useful, and they’re scrambling to find workarounds. So far, the best they’ve found involves factory resetting the treadmill. It restarts with old software that includes privilege mode. Then, you have to block NordicTrack’s update servers at the network level to keep the new software from asserting itself. 
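One common way to implement that kind of network-level block is a DNS null route on the household router, a Pi-hole, or a hosts file. The domain below is a placeholder invented purely for illustration, not NordicTrack’s actual update host:

```shell
# Hypothetical hosts-file entry (e.g. on a router or Pi-hole) that
# null-routes an update server; the domain is a made-up placeholder.
0.0.0.0  updates.example-treadmill-vendor.com
```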

NordicTrack says anyone who has used a workaround to access privilege mode could find their warranty voided. That hasn’t stopped owners from trying to regain some of that lost functionality. NordicTrack can swear up and down that this is a safety issue, but there are smarter ways to protect machines in public settings. For example, it could gate sideloading behind an administrator account for personal use, which is simple to implement in Android. If anything, this sounds like NordicTrack doing whatever it can to keep people paying $39 every month for content on that big 32-inch screen. You know what doesn’t have any content restrictions? A TV. You can just put one in front of a cheaper treadmill, as our ancestors did.


November 23rd 2021, 3:18 pm

NASA Delays Webb Telescope Launch Following an ‘Incident’


NASA has been working on the James Webb Space Telescope for 20 years, and there have been numerous delays. The marvel of astronomical technology is currently preparing for launch, but NASA says we’ll have to wait just a bit longer. Following a minor “incident,” NASA has pushed the launch of Webb back by four days. That will give the team time to check for damage one last time before launch. 

The Webb telescope will serve as the successor to Hubble, which has survived long past its intended design life. With the aging telescope on the verge of failure on an almost weekly basis, the need for Webb has never been greater. Of course, it was supposed to be in operation years ago, but building the most powerful space-based observatory in human history is no simple feat. 

Several weeks ago, Webb made its journey from the US to French Guiana, where NASA’s European partners will launch the spacecraft aboard an Ariane 5 rocket. However, NASA says that an “incident” occurred while technicians were mounting the telescope to the launch vehicle adapter, which mates the observatory to the upper rocket stage. According to NASA’s initial report, a clamp band used to secure the telescope to the adapter was accidentally released. This “caused a vibration throughout the observatory.”

Webb arrived at the launch site by barge several weeks ago.

Webb is going to have to cope with intense vibration during launch, but there’s no reason to take any chances here. Webb’s total price tag is hovering around $10 billion, but that’s nothing compared to the time it took to design and build, making it the very definition of “irreplaceable.” NASA has convened an anomaly review board that will investigate the incident and conduct additional testing to ensure the observatory is still in perfect working order. Once it’s deployed, Webb will be too far away for any maintenance missions. 

Hopefully, we’ll hear in the coming days that the telescope is fine, and the four-day pause will be the last delay before Webb finally leaves Earth behind. When it’s finally operational, Webb will be able to peer at more distant, dimmer objects than any other instrument in the world from its vantage beyond the orbit of the moon. It could help us understand the dawn of the universe, the life and death of stars, and even help study exoplanets that could harbor life. We just need to get it into space in one piece.


November 23rd 2021, 3:18 pm

Winamp Prepares to Relaunch: Can it Still Whip the Llama in 2021?


It’s not a new development in the tech world that bringing back “formerly loved” items from the past is cool again, but that usually applies to old hardware like gaming consoles, smaller phones, and so forth. This time around, though, it’s software from a bygone era that’s attempting a comeback. Winamp, the formerly hugely popular music player, plans to relaunch in 2021, according to a report by BleepingComputer. What’s surprising about this announcement is that the software hasn’t been updated since 2013, and, as we noted, people don’t use media players like Winamp anymore.

First off, if you’re under the age of 30 and reading this, some explanation is required. You see, back in the 2000s, music streaming wasn’t really a thing yet, so we used to take our music CDs and rip the tracks into MP3 format. This conversion reduced the file size immensely and let us transfer the files to a portable music player like the iPod or, for a handful of folks, a Zune. The small file size also fueled the explosion of P2P file-sharing. Though we had our mobile music needs met, we also needed software to play music on our PCs, and for that a lot of people used Winamp, ourselves included. It offered a ton of cool skins, had a visualizer, and was just fast and free, two things we appreciate in every piece of software.
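The size savings were the whole point: CD audio runs at a fixed 1,411.2 kbps (44.1 kHz, 16-bit, stereo), while a typical high-quality MP3 of the era was 192 kbps. A quick back-of-the-envelope comparison (bitrates only; real files vary with headers and variable-bitrate encoding):

```python
def audio_size_mb(minutes, kbps):
    """Approximate file size from duration and a constant bitrate."""
    return minutes * 60 * kbps / 8 / 1000  # kbit/s -> kbyte/s -> MB

cd = audio_size_mb(60, 1411.2)  # an hour of uncompressed CD audio
mp3 = audio_size_mb(60, 192)    # the same hour as a 192 kbps MP3
print(round(cd), round(mp3))    # roughly 635 MB vs 86 MB
```

Better than a sevenfold reduction, which is what made both iPods and file-sharing practical on the era’s storage and dial-up lines.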

Installing Winamp in 2021 certainly brings back some good memories. (The fact that you wrote an explainer for this hurt me in my soul. -Ed). 

The big question now is, since everyone uses streaming services like Spotify and Apple Music to listen to their tunes, what place does Winamp even have in today’s market? According to the website, the software isn’t just updated, it’s “remastered,” with the goal of becoming the one app you can use to connect to your favorite artists, which includes podcasters. Winamp will apparently not only be marketed to end users who just want to consume some content, but artists and creators as well who are unhappy with the arrangements provided by today’s most popular streaming services. The site states, “For artists and audio creators we’re all about giving you control over your content. We’ll help you to connect closely with your fans and earn a fairer income from doing what you love.”

Judging by all this marketing copy, the company seems intent on leveraging its nostalgic connection to its “80 million” users around the world, but whether it can do so in a world that has collectively moved on to an entirely new format for music consumption remains to be seen. That said, if you’re curious about what the company has coming down the pike, you can download the latest version from its website to get a feel for it. We installed it, and it looks exactly as we remembered it from so many years ago. The company is also asking people to sign up for its upcoming beta, which will supposedly offer all the new features currently being teased on its website.


November 22nd 2021, 8:32 pm

Rumor Mill: Apple Working on an M1 Max Duo SoC for Upcoming iMac Pro


(Image: Apple)
It’s no industry secret that Apple is working feverishly behind the scenes to excise every trace of Intel silicon from its Mac lineup. The company announced the switch from Intel to its own custom chips in 2020 and began the transition that same year, with the original M1 chip landing at the very bottom of its lineup in the MacBook Air, Mac Mini, and entry-level 24″ iMac. Next up were the upgraded M1 Pro and Max SoCs, which landed in the company’s revamped 2021 MacBook Pros. That leaves just two models on the upgrade path: the “big” 27″ iMac, and the pinnacle of power, the Mac Pro, both of which still use Intel processors and AMD discrete GPUs. According to a report, one of the chips destined for them will break cover soon: an M1 Max Duo SoC the company plans to drop into an all-new iMac Pro.

As its name implies, the M1 Max Duo will reportedly be two M1 Max chips connected together, doubling everything the Max chip offers. This translates to a 20-core CPU and a 64-core GPU, along with support for up to 128GB of RAM. There are also Mac Pro rumors suggesting an M1 Max Quadro with a 4x design. This is a huge upgrade from the original M1 chip, which has just eight CPU cores and seven GPU cores, along with a maximum of 16GB of memory. The sources of these rumors are twofold: Bloomberg journalist Mark Gurman, and Hector Martin, who is porting Linux to Apple silicon Macs.

Apple’s M1 Max chip is a beast, but what if it was doubled or even quadrupled? (Image: Apple)

Gurman, a noted Apple insider, pointed out in a recent tweet that Apple is indeed working on taking the M1 Max die and simply multiplying it both 2x and 4x for upcoming desktop chips. In his tweet he writes, “…the new Mac Pro desktop is expected to come in at least two variations: 2X and 4X the number of CPU and GPU cores as the M1 Max. That’s up to 40 CPU cores and 128 GPU cores on the high-end.” This dovetails with info from Mr. Martin, who has been elbow-deep in macOS code and reports, “…the macOS drivers have plenty of multi-die references, and the IRQ controller in the M1 Pro/Max is very clearly engineered with a (currently unused) second half for a second die.” If that’s not enough information for you, he adds, “For the technically minded: it’s a second set of config/mask/software-gen/hw-state registers, and the hardware inputs are all idle but you can software-gen IRQs in that block just fine and they get delivered with a die-id of 1 in the top 8 bits of the event register.” If you’re more into the video thing, YouTuber MaxTech goes into significant detail about all of these rumors.
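Martin’s die-id detail can be pictured with a bit of bit-twiddling. This is a minimal sketch; the 64-bit register width and exact field layout are our assumptions for illustration, not Apple documentation:

```python
DIE_ID_SHIFT = 56   # top 8 bits of an assumed 64-bit event register
DIE_ID_MASK = 0xFF

def decode_event(event: int) -> tuple[int, int]:
    """Split an IRQ event word into (die_id, payload)."""
    die_id = (event >> DIE_ID_SHIFT) & DIE_ID_MASK
    payload = event & ((1 << DIE_ID_SHIFT) - 1)
    return die_id, payload

# An IRQ software-generated in the second die's block would carry die-id 1:
die, payload = decode_event((1 << DIE_ID_SHIFT) | 0x2A)  # → (1, 42)
```

The point of such a scheme is that one interrupt controller can fan events from multiple dies into a single stream while software still knows which die fired.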

For those of us who are silicon aficionados, it’s been fascinating to watch Apple’s moves in this market, as the M1 chips have upended the notion of what we can expect from a mobile CPU by offering both blistering performance and incredible efficiency, a rare feat indeed. That’s why the prospect of an Apple chip that doesn’t need to fit in a mobile device is so alluring: the company can theoretically unleash the hounds, since it won’t need to worry about power consumption, within reason at least. The bad news, however, is that the same tipsters offering these tantalizing leaks are also pointing to insane price tags for this much power, with one suggesting the top-end Mac Pro could cost around $50,000. That shouldn’t come as a big surprise, though, as you can already spend that much money quite easily on the current Xeon-powered Mac Pro tower, even without the $400 wheels.


November 22nd 2021, 3:17 pm

Intel Shows Off Next-Gen Chips at Fab 42


(Photo: Stephen Shankland/Cnet)
Cnet reporter Stephen Shankland recently took a tour of Intel’s sprawling Fab 42 in Chandler, Arizona, and Intel let him take a peek at a lot of its upcoming chip designs. The result is a photo gallery that is verified to be Not Work Safe if you’re as into ogling silicon wafers as we are.

As we wrote in 2017, Fab 42 was originally designated as the place where Intel’s future chips would be made on a 7nm process. Back then that node was very forward-looking, as Intel was still struggling with its 10nm development, a struggle it only recently resolved with the launch of Alder Lake. Now Fab 42 is humming right along on the company’s next-gen products, and just as we predicted four years ago, it’s using Extreme Ultraviolet Lithography, or EUV. This 7nm process is now branded Intel 4 under the company’s renamed node scheme; the number is a marketing label meant to line up with competitors’ node names, not a literal dimension. (Intel’s genuinely angstrom-named nodes, such as Intel 20A, come later; one angstrom = 100 picometers, while one nanometer = 1,000 picometers.) Alder Lake was made on Intel’s 10nm-class process, which is called Intel 7.
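A couple of lines of arithmetic keep those units straight:

```python
# 1 nm = 1,000 pm and 1 angstrom = 100 pm, so 1 nm = 10 angstroms.
PM_PER_NM = 1000
PM_PER_ANGSTROM = 100

def nm_to_angstroms(nm: float) -> float:
    """Convert nanometers to angstroms via picometers."""
    return nm * PM_PER_NM / PM_PER_ANGSTROM

seven_nm_in_angstroms = nm_to_angstroms(7)  # → 70.0
```

In other words, a 7nm-class process is a 70-angstrom-class process, which is why the angstrom-era node names only start at Intel 20A.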

Highlights of Shankland’s trip include a look at the company’s 2023 chips, dubbed Meteor Lake. These 14th-generation chips are significant because they are the first client-oriented CPUs to utilize an all-new chiplet design, as opposed to the monolithic designs Intel has used in previous chips. This represents a stark departure for Intel, but as nodes get smaller and smaller, both AMD and now Intel have turned their attention to chiplets and packaging as the key to improving performance in their next-gen offerings. Meteor Lake will use Intel’s Foveros technology to stack chiplets vertically rather than place them side by side. The chips shown to Cnet lacked functioning processing circuitry and were apparently just being used to test the fab’s packaging capabilities.

Intel’s 7nm Meteor Lake CPUs (Image: Cnet)

Other notable appearances include Intel’s absolutely massive Ponte Vecchio chip, which was designed to power the Department of Energy’s Aurora supercomputer. The chip combines every next-gen technology Intel is currently pursuing, with 47 separate chiplets connected laterally with Embedded Multi-Die Interconnect Bridges (EMIB), and vertically with Foveros stacking.

Intel’s gargantuan Ponte Vecchio processor (Image: Cnet)

Intel giving reporters a look inside its facilities seems to be part of its multi-year plan to once again become known as an engineering powerhouse, a mantle it has seemingly lost to rival TSMC in recent years. Part of this strategy includes Intel offering its silicon fabrication services to other companies, which it is now doing under the Intel Foundry Services moniker. The company has even gone so far as to say it hopes to win back Apple’s business, as the iPhone maker famously jettisoned Chipzilla’s CPUs in favor of its own TSMC-made silicon; the same goes for AMD, which also uses TSMC for its latest chips. As proof of Intel’s commitment to regaining the mantle of engineering supremacy, Cnet notes Intel is currently ramping up two more fabs in Arizona (Fab 52 and Fab 62) at a cost of $20 billion, with plans for a third fab that will cost a whopping $100 billion, location unannounced thus far.

Overall, Shankland’s article provides a fascinating look inside Intel’s operations, and the photo gallery that accompanies it is not to be missed.


November 22nd 2021, 10:15 am

FTC Will Crack Down On Companies That Make it Difficult to Cancel Services


(Image: FTC)
The Federal Trade Commission has announced that it will begin “ramping up enforcement” against companies that manipulate people into starting subscriptions or that make it difficult to cancel services. It’s an unanticipated move that should disappoint almost no one—except maybe your cable company.

In an announcement from late last month, the FTC said companies that fail to obtain informed consent or provide clear upfront information when selling a service will be considered to be using “dark patterns” to trick or trap customers. The agency’s new enforcement policy statement warns companies engaged in such practices that, without rapid changes, the FTC may pursue civil penalties, injunctive relief, and consumer redress. The statement spells out what it looks like for a company to obtain a customer’s consent and to notify customers of imminent charges. It also warns companies against using a customer’s silence or failure to act as permission to continue charging for a service. The new enforcement policy was approved by a 3-1 vote, with the only dissent coming from Commissioner Christine S. Wilson, who argued it conflicts with the FTC’s open rulemaking.

The FTC’s Office of Public Affairs. (Photo: FTC)

This isn’t the first time the FTC has cracked down on shady subscription practices. In fact, the FTC has been a major factor in trying to prevent such practices for years, having enforced rules against questionable or downright exploitative automatic renewal terms, free-to-pay conversions, and user interfaces that trick customers into sticking with a subscription even when they want to cancel. The FTC has sued companies for hiding cancellation buttons online, and for making customers listen to lengthy ads or sit on hold for extended periods of time before they can even talk about cancelling a plan. 

The frustration of figuring out how to cancel, say, an internet package or streaming service is so pervasive that it could practically be considered an American rite of passage. Mainstream television has incorporated jokes about the whole ordeal into shows like Saturday Night Live and Brooklyn Nine-Nine, knowing just about anyone watching would be able to relate. Of course, a general threat of enforcement is different from actually bringing down the hammer on service providers who love to keep customers on hold—but here’s hoping the experience of tearing one’s hair out to cancel a service becomes a little less relatable with the FTC’s new enforcement policy. 


November 22nd 2021, 10:15 am

Rockstar Apologizes for GTA Definitive Trilogy Launch, Will Make Classic Games Available Again


Gamers the world over have been waiting with bated breath for the release of Rockstar’s GTA Trilogy, and maybe it wasn’t worth the wait. The launch last week was an unmitigated disaster, with gamers complaining of bugs and ugly graphics until Rockstar’s authentication system broke, making the games impossible to play. Rockstar has now apologized for the mess and confirmed a bug-fixing patch is incoming. By way of contrition, Rockstar will also make the original versions of the games available once again. 

The official name of the revamped games is “Grand Theft Auto: The Trilogy – The Definitive Edition,” and it includes three iconic titles: GTA3, GTA: Vice City, and GTA: San Andreas. These games launched between 2001 and 2004, setting the standard for open-world gameplay. When modernizing the games for 2021, Rockstar opted to go with a simplified cartoon style. Many fans have expressed disappointment that Rockstar didn’t go further and make the games more detailed. Along the way, developers also introduced a number of glaring bugs that weren’t in the original versions.

In its apology, Rockstar admits that the Trilogy has technical problems and doesn’t meet the studio’s standards. Apologies are all well and good, but they don’t make busted games work correctly. To that end, Rockstar says it is working on a bug-squashing update that will roll out soon, although “soon” could mean a lot of things in this context. The authentication issue was rectified last week, but that still leaves a trio of broken games more than a week after launch.

Rockstar also seems willing to admit that removing the classic versions of these games in advance of the Definitive Edition release was a bad idea. So, they’re coming back, sort of. Rockstar says that it will release a new bundle of all three classic games on its website. In addition, anyone who bought The Definitive Edition or buys it through June 30, 2022 will get the classic editions at no extra cost. However, Rockstar has not said whether or not it plans to return the classic versions to other digital storefronts like Steam. 

If you were on the fence about the GTA Trilogy, you might want to wait and see how this upcoming patch works out. If it can stomp some bugs, then all you have to get past is the questionable visual design. Maybe there will be a better version in another 20 years.


November 22nd 2021, 9:01 am

MediaTek Unveils Fully Loaded Flagship Mobile Processor


In most Western nations, MediaTek plays second banana to Qualcomm; the best and most powerful phones all run on Qualcomm ARM chips. That might change soon, though. MediaTek has just announced its first true flagship system-on-a-chip (SoC) in years, which it’s calling the Dimensity 9000. It’s the first chip built on TSMC’s 4nm process, and it has all the latest technology. Still, this isn’t MediaTek’s first attempt to get into the premium segment, and Qualcomm is entrenched, to say the least.

Despite its reputation as an also-ran chipmaker, MediaTek has had a very successful few years. Its components power smart speakers, the new Kindle, and uncountable mid-range smartphones around the world. In fact, MediaTek recently attained the top spot among global SoC makers with a 40 percent overall market share and 28 percent of the 5G market. Given its new position as the market leader, the Dimensity 9000 starts to make more sense.

MediaTek didn’t go crazy cramming in ten CPU cores as it did with its last flagship attempt. The Dimensity 9000 is an octa-core chip, but it does have the latest core designs from ARM—they’re so new you might not even recognize the names. At the forefront is a Cortex-X2 clocked at 3.05GHz, paired with three slightly slower 2.85GHz Cortex-A710 cores. The rest of the CPU is composed of four Cortex-A510 cores at 1.8GHz. The X2’s clock is surprisingly high, which could give MediaTek a single-threaded performance advantage even as Samsung and Qualcomm adopt the X2 down the road.

MediaTek didn’t stop with the CPU—the Dimensity 9000 is also the first mobile SoC with support for LPDDR5X memory. The company says the new memory allows a 17 percent boost in available bandwidth, with per-pin speeds of up to 7,500 Mbps. And yes, the GPU is new as well. The Dimensity 9000 will be the first SoC to ship with the Mali-G710 GPU. Each GPU core has about twice the performance of the older G78’s cores, and there are ten of them. It also supports software ray tracing via Vulkan.
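That 17 percent figure checks out against LPDDR5’s per-pin ceiling. A quick sketch, assuming the JEDEC LPDDR5 maximum of 6,400 Mbps as the baseline (our assumption; MediaTek doesn’t state the comparison point):

```python
# Per-pin data rates: LPDDR5 tops out at 6,400 Mbps (assumed baseline);
# the article quotes 7,500 Mbps for LPDDR5X on the Dimensity 9000.
LPDDR5_MBPS = 6400
LPDDR5X_MBPS = 7500

boost = (LPDDR5X_MBPS - LPDDR5_MBPS) / LPDDR5_MBPS  # ≈ 0.172, i.e. ~17 percent
```

The claimed gain is almost exactly the per-pin speed bump, so total bandwidth presumably scales with the same bus width as before.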

Google has talked up the AI capabilities of its new Tensor chip, and MediaTek is hoping to compete with the fifth iteration of its custom NPU. The company says the new APU core is four times more powerful and four times more efficient than the last one, which hypothetically puts it just in front of Google’s Tensor. Naturally, the new image signal processor (ISP) supports the latest ultra-high-resolution camera sensors, and the chip handles HDR and high refresh rates—the whole nine yards.

The only thing the Dimensity 9000 is missing is millimeter wave 5G. It has sub-6GHz support that will cover almost every carrier in the world, but the lack of mmWave might make it a tough sell in the US, so Qualcomm will probably retain its niche there. That’s a bummer, because mmWave is bad and the Dimensity 9000 seems pretty good. The first devices with this chip should appear early next year.


November 19th 2021, 9:14 pm

DuckDuckGo Announces App Tracking Prevention for Android


Your smartphone goes everywhere with you, and the data it contains describes every detail of your daily life. Naturally, advertisers want to use the marvel of mobile technology to improve ad targeting, and DuckDuckGo wants to put a stop to that. The company has announced a tracking blocker for its Android app. The feature isn’t live yet, but you can join the waitlist today. 

When you open an app, the developer isn’t the only one getting data from you. Most apps — DuckDuckGo says 96 percent, to be exact — include third-party tracking code from the likes of Google and Facebook. These systems don’t need to spy on you 24/7, though. Even a trickle of data over weeks can give companies an idea of where you’re going, what you’re doing when you get there, and even when you go to bed. 

Apple recently rolled out a feature called App Tracking Transparency (ATT), and DuckDuckGo says its upcoming feature is similar. On iOS, apps have to ask for permission to track you across other apps and services, and most people say “no.” Facebook, Twitter, Snap, and other big names are reporting billions in lost revenue following the release of Apple’s solution. 

The DuckDuckGo system won’t be integrated into the OS, so it might not be as effective as Apple’s App Tracking Transparency. However, it’s free and will operate entirely on your device using Android’s built-in VPN functionality. Rather than routing your data through a remote server like most VPN services, App Tracking Protection (ATP) will run in the background, alert you when it sees a request headed for a known third-party tracker, and block it. You can also disable ATP for select apps.
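We don’t know how DuckDuckGo’s blocker is implemented internally, but the core idea (match each outgoing request’s hostname against a tracker list, subdomains included) can be sketched in a few lines. The domain names here are invented for illustration:

```python
# Hypothetical tracker list; DuckDuckGo maintains its own, much larger one.
TRACKER_DOMAINS = {"tracker.example", "ads.example"}

def is_tracker(hostname: str) -> bool:
    """True if hostname is a listed tracker domain or a subdomain of one."""
    labels = hostname.lower().rstrip(".").split(".")
    # Test every suffix, so "cdn.tracker.example" matches "tracker.example".
    return any(".".join(labels[i:]) in TRACKER_DOMAINS for i in range(len(labels)))

requests = ["api.myapp.example", "cdn.tracker.example", "ads.example"]
blocked = [host for host in requests if is_tracker(host)]  # the last two
```

Suffix matching is what lets a single blocklist entry catch the endless per-app subdomains trackers tend to use.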

Users will get a running tally of the trackers that ATP blocks, as well as a look at where those requests were going. DuckDuckGo says that in its testing, 87 percent of apps sent data to Google and 68 percent sent data to Facebook, and ATP can allegedly stop that without breaking apps. It doesn’t sound like this solution will be as elegant as Apple’s, but it’s the best we’ll get unless Google forgets that it’s an advertising company and adds its own version of tracking protection.

The feature is available in beta, but you can’t just start using it right away. You have to download the DuckDuckGo app and sign up for the waitlist (in settings). The company didn’t estimate how long it would take to get testers onboarded.


November 19th 2021, 12:58 pm

Apple Granted Patents for All-Glass iPhone and Pro Tower


(Image: USPTO)
Apple patents a whole lot of the ideas it has, just to be safe, and some of them even come to market, eventually. A new patent dated November 16, 2021, is a bit different from most in that it shows designs for all-glass Apple products, including the iPhone, Apple Watch, and even a theoretical glass Mac Pro tower. An all-glass iPhone would surprise nobody, but a PC tower made of glass? That’s definitely “thinking different.”

Of course, the headline product in the filing is the glass phone, which is described in the abstract as an “Electronic device with glass enclosure.” As 9to5Mac accurately notes, it’s almost as if Apple has patented a glass box. The filing’s abstract describes it as such: “An electronic device includes a six-sided glass enclosure defining an interior volume and comprising a first glass member and a second glass member. The first glass member defines at least a portion of a first major side of the six-sided glass enclosure, at least a portion of a peripheral side of the six-sided glass enclosure, a first region along the peripheral side and having a first thickness, and a second region along the peripheral side and having a second thickness different from the first thickness. The second glass member is attached to the first glass member and defines at least a portion of a second major side of the six-sided glass enclosure. The electronic device further includes a touchscreen display within the interior volume and positioned adjacent at least a portion of each of the six sides of the six-sided glass enclosure.”

The filing shows a display on the back of the phone, which would be a first (Image: USPTO)

Sorry you had to read all of that. What’s interesting to note in the included drawing is that the phone has notifications along its bottom edge for the AAPL stock price and current temperature, with side buttons inlaid on a curved edge allowing for Airplane mode, Wi-Fi toggling, and volume control. The filing also shows the capability to have a display on the back of the phone, and as PatentlyApple points out, it includes an example display component that may define six display regions. In the lengthy filing, Apple discusses the challenges inherent in producing “…a device with multiple displays viewable through multiple transparent sides–including embodiments where displays are visible through each of the six main sides of the device.” Obviously a device with multiple displays is nothing new, and the same goes for phones with displays on the edges, but a phone with a display on every side is a whole new ball of wax, indeed.

Apple’s included almost no information about its all-glass Mac Pro tower in the filing. (Image: USPTO)

To wrap things up, Apple also included a drawing of an all-glass Mac Pro tower, which is kind of a head-scratcher at first, but upon reflection it would actually be kind of neat. Those of us who have gaming PCs know the pride we feel as we gaze through our case’s tempered glass window at our lovely components in action, a feeling that is partly responsible for the current RGB craze (which we admit we are guilty of appreciating). Imagine an all-glass Mac, transparent on every side, with a huge M1 Max Extreme CPU pumping away inside, cooled by some sort of Apple-designed closed-loop system. That could actually be pretty cool, pardon the pun.


November 19th 2021, 12:58 pm

Google Confirms the Pixel 6 Doesn’t Charge As Fast As We Thought


Google’s recently released Pixel 6 and Pixel 6 Pro are huge steps forward for the company’s smartphones. Google is finally getting serious about its hardware by offering big batteries, great displays, and a custom ARM chip called Tensor. Google even boosted the Pixel series’ anemic charging speed, although it’s not as fast as we thought. Google has clarified how the latest Pixels charge, and the true maximum rate is not the full 30W supported by the new charger.

Like almost everything that isn’t an iPhone, the new Pixel phones charge over USB-C. The last few Pixels were stuck at 18W maximum charging speed, but Google announced the 2021 Pixels will work with its new 30W charger, which uses the latest USB Power Delivery (USB-PD) features. We all took that to mean the Pixels could charge at 30W, but Google has published a support article that explains why that is not the case. 

Google’s battery explanation has a hint of Apple hand-waving. It shied away from listing the actual rated charging speed on the official spec table, preferring to give rough estimates of charging times as measured with the 30W adapter. For the record, the Pixel 6 can charge to 50 percent in about 30 minutes, which is respectable but not incredible. 

According to the new support page, the Pixel 6 tops out at 21W and the Pixel 6 Pro can hit 23W. As with all smartphones, those speeds are only attainable when the battery is mostly empty; the rate drops as the battery fills, which helps avoid excess wear on the cells. Google says it designed the battery and charging system to “strike a balance between battery life, longevity and fast charging.” One way Google accomplishes that is by tying the phones’ charging behavior into the Pixel’s machine learning capabilities. Adaptive charging can pause charging above 80 percent or slow charging to a trickle overnight in certain circumstances.
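Google hasn’t published its exact charging curve, but the taper it describes can be sketched as a toy policy. The 21W cap is the Pixel 6 figure above; the thresholds and taper fractions are invented for illustration:

```python
def charge_rate_watts(state_of_charge: float, cap_watts: float = 21.0) -> float:
    """Target charge rate for a given state of charge (0.0 = empty, 1.0 = full)."""
    if state_of_charge < 0.5:
        return cap_watts          # mostly empty: charge at full speed
    if state_of_charge < 0.8:
        return cap_watts * 0.6    # mid-range: back off to reduce heat
    return cap_watts * 0.1        # above 80 percent: trickle to limit cell wear

rates = [charge_rate_watts(soc) for soc in (0.2, 0.6, 0.9)]  # full, reduced, trickle
```

A stepped taper like this is why “50 percent in 30 minutes” says little about how long the last 20 percent takes.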

Google’s messaging here has been lacking, but at least we have all the technical details now. If you want to pick up Google’s official charger, it should be very versatile. Not only does it support standard USB-PD up to 30W, but it also has PPS (programmable power supply) support, which is what the Pixels and a few other cutting-edge devices use to vary voltages more efficiently. So it should fast charge your Pixels and anything else with a USB-C port up to 30W. It’s $25 from Google, or $35 if you want a cable with it. Any third-party USB-PD charger with enough capacity should be able to hit the Pixel 6 and 6 Pro maximum charging speeds (based on my testing), but it won’t be as efficient.


November 19th 2021, 12:58 pm

Microsoft Chief Calls for Commitment to Game Preservation


(Photo: Kamil S/Unsplash)
Phil Spencer, VP of Gaming at Microsoft, is calling on gaming companies across the industry to work toward preserving older games. 

In a statement to Axios, Spencer shared that he hopes more companies will follow in Microsoft’s footsteps by making older titles available through software emulation, something most easily enjoyed by PC and Xbox users. While games from other brands often remain locked to their console counterparts (cough, PlayStation), Microsoft has long and openly advocated for the preservation of games that are past their prime. It’s why the company’s latest consoles, the Xbox Series X and S, are considered emulation powerhouses: each can play games made for Xbox consoles as far back as the original Xbox, which came out in 2001.

Spencer likens gamers’ ability to revisit old titles to our collective ability to revisit old art. “I think we can learn from the history of how we got here through the creative,” Spencer wrote in a message to Axios. “I love it in music. I love it in movies and TV, and there’s positive reasons for gaming to want to follow.”

Generally speaking, emulation allows individuals to play games that aren’t necessarily made for the system they’re playing on. This is most often used for retro games, like the original Spyro the Dragon, Street Fighter, and Donkey Kong, to name a few, though it can also be used for games that simply aren’t backwards compatible with a console one generation away.  

Blinx: The Time Sweeper is an Original Xbox game that can be emulated on the Xbox One and Xbox Series X/S.

It’s important to note that some forms of emulation skirt legal boundaries when they involve the sharing of copyrighted ROMs. (Emulators essentially serve as a software-based recreation of the console, while ROMs are the games themselves.) Spencer is specifically calling for legal emulation, in which the onus is on video game companies to find ways to keep older titles alive, not on individuals to swap ROMs on a gray market.
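At its heart, an emulator is just a fetch-decode-execute loop that interprets the instruction stream stored in a ROM. Here’s a toy machine, entirely made up for illustration, with three opcodes:

```python
def run(rom: list[tuple[str, int]]) -> int:
    """Interpret a tiny 'ROM' of (opcode, argument) pairs; return the accumulator."""
    acc, pc = 0, 0
    while True:
        op, arg = rom[pc]   # fetch the next instruction
        pc += 1
        if op == "LOAD":    # decode + execute
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "HALT":
            return acc

result = run([("LOAD", 40), ("ADD", 2), ("HALT", 0)])  # → 42
```

Real console emulators do the same thing at enormously greater scale, which is why the emulator (the loop) and the ROM (the data it interprets) are legally distinct artifacts.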

“My hope (and I think I have to present it that way as of now) is as an industry we’d work on legal emulation that allowed modern hardware to run any (within reason) older executable allowing someone to play any game,” Spencer said.

While it’ll take some time to see if other brands follow through on Spencer’s call, Nintendo is already making a bit of headway. At its latest Nintendo Direct, the company announced it had brought several Nintendo 64 and Sega Genesis titles to the Nintendo Switch, where gamers can now revisit Mario Kart 64, the original Pokemon Snap, and The Legend of Zelda: Majora’s Mask. By doing so, Nintendo breathed life into a good chunk of its original library—and at its core, that’s what emulation is all about.


November 19th 2021, 12:58 pm

Semiconductor Industry Forecast Projects Huge Gains for AMD, Losses for Intel and Sony


Semiconductor market research company IC Insights has released its November update, which ranks the top 25 semiconductor companies by projected year-on-year sales growth and includes some forecasts for the future. AMD leads the way with an anticipated 65 percent year-on-year gain, thanks to its strong product lineup across both its GPU and CPU lines, along with the inroads its Epyc CPUs have made in the data center business. Bringing up the rear in 25th place is Sony, which is expected to show a small decline at -3 percent, with Intel right above it at -1 percent. The two tech giants are the only companies on the list with anticipated negative performance for the year.

Overall, the report paints a bright picture for the industry as a whole, with projected annual growth of 23 percent. It notes this strong growth is being driven by several factors: changing habits caused by the pandemic and the rebound from it, a 20 percent increase in semiconductor shipments, and a three percent higher average selling price (ASP). The report highlights that growth like this hasn’t been seen in the industry since 2010, when it grew 34 percent following the 2008/2009 economic recession.
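Those two factors compose multiplicatively, which is roughly where the 23 percent figure comes from:

```python
# 20 percent more units shipped at a 3 percent higher average selling price:
unit_growth = 1.20
asp_growth = 1.03
revenue_growth = unit_growth * asp_growth - 1  # ≈ 0.236, close to the 23 percent projection
```
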

IC Insights’ Top 25 Companies by projected sales (Image: IC Insights)

As far as Intel and Sony are concerned, the report describes the expected performance of these two behemoths in the middle of a booming period as “utterly amazing,” and it does not mean that in a good way. It notes that Intel, the world’s second-largest supplier of semiconductors, faces several challenges, chief among them the ongoing supply chain issues, which have reduced sales of Intel-based laptops because its partners have been unable to get all the parts they need from their suppliers. Intel is not standing still, however: its Intel Foundry Services unit recently began shipping chips to paying customers, and the company hopes this new business will become a significant source of revenue in the coming years.

Sony’s ranking is also the result of supply chain issues, according to the report. The company hasn’t been able to make enough PlayStation 5 consoles due to parts shortages, resulting in lower-than-expected revenues. It has also been reported that Sony is looking to partner with TSMC on a joint venture that would ease some of the strain on chip supplies and help protect the company against future disruptions. However, that facility won’t be online until 2024 at the earliest, by which time we will hopefully be beyond the current issues affecting the global supply chain.

Rounding out the top four on the list are all “fabless” companies, which outsource production of the silicon at the heart of their products. Coming in just below AMD are tech titans MediaTek, Nvidia, and Qualcomm, all projected to show gains of over 50 percent for the year, which is impressive indeed. Nvidia’s appearance near the top of the list is no surprise, as it just reported record revenue for the third quarter.


November 19th 2021, 12:58 pm

Nvidia Announces Record $7.1 Billion Q3 Revenue


The best GPU money currently can't buy.

On Wednesday, Nvidia announced it’s making more money than ever before, pulling in a record-breaking $7.1 billion in the third quarter ended October 31, 2021. According to the summary posted by Nvidia, this represents a whopping 50 percent increase over the same quarter last year. In fact, the best way to summarize the quarter is to simply say that everything is up, everywhere, across the entire company. Everything Nvidia sells is selling like hotcakes and will probably continue to do so for the foreseeable future (we are not financial advisors).

Overall revenue, as stated previously, is a new record. Broken down a bit, Data Center revenue of $2.94 billion is also a record, up 55 percent from the year-ago quarter. Gaming revenue is a record as well at $3.22 billion, up a crazy 42 percent from a year earlier. Nvidia founder and CEO Jensen Huang explained the boom this way: "Demand for NVIDIA AI is surging, driven by hyperscale and cloud scale-out, and broadening adoption by more than 25,000 companies. NVIDIA RTX has reinvented computer graphics with ray tracing and AI, and is the ideal upgrade for the large, growing market of gamers and creators, as well as designers and professionals building home workstations."

(Image: Nvidia)

As anyone who has tried to buy a video card in the past year knows, demand is indeed surging, and apparently that's true for all of the company's products, including its Gaming, Data Center, and Professional Visualization market platforms, according to the summary. As if it even needs to be stated, the company also beat its quarterly sales expectations and saw its market cap hit $800 billion, prompting CNBC's Jim Cramer to declare Nvidia the next trillion-dollar company, which does not seem like a far-fetched conclusion.

One very interesting tidbit in the earnings summary is that, according to CFO Colette Kress, "Nearly all our desktop Ampere architecture GeForce GPU shipments are Lite Hash Rate in our effort to direct GeForce to gamers." If you thought Nvidia was making all this money from crypto miners, that does not appear to be the case. According to a summary on Tom's Hardware, a year ago Nvidia sold $266 million worth of its specialized Cryptocurrency Mining Processor (CMP) GPUs in Q3, then $150 million in Q1 FY2022, followed by just $105 million in Q3. Kress notes, "We do not have visibility into how much this (sales of mining cards) impacts our overall GPU demand. Volatility in the cryptocurrency market, including changes in the prices of cryptocurrencies, can impact demand for our products and our ability to estimate demand for our products."

Going forward, the gravy train shows no signs of slowing down, not in the next quarter, or perhaps even far into next year. Nvidia estimates its Q4 revenue will be an even higher $7.40 billion, plus or minus two percent, and it notes in the earnings summary that it has already entered into several long-term supply agreements to ensure it has enough parts in the future. These agreements required a payment of $1.64 billion to its suppliers, with another $1.79 billion due in the near future.
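As a quick sanity check on the guidance figure above, the plus-or-minus two percent band works out as follows (a trivial sketch using only the article's numbers; the function name is ours):

```python
def guidance_range(midpoint_b, pct):
    """Revenue guidance band in billions: midpoint plus or minus a percentage."""
    delta = midpoint_b * pct / 100.0
    return (round(midpoint_b - delta, 2), round(midpoint_b + delta, 2))

# Nvidia's stated Q4 guidance: $7.40 billion, plus or minus two percent.
print(guidance_range(7.40, 2.0))  # -> (7.25, 7.55)
```

So the guidance implies somewhere between roughly $7.25 billion and $7.55 billion, either end of which would be another record.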

Now Read:

November 18th 2021, 9:27 pm

Microsoft Enables Edge Sync By Default, Hoovering Up Your Data in the Process


When I launched Edge this morning, I was surprised to see a message informing me that Sync was now enabled and my data was being uploaded to the cloud, to be shared with other PCs where I was also logged in.

I don’t use synchronization services between browsers because I have no interest in sharing this information with Google, Firefox, Microsoft, or any other company. Google definitely tries to push end users to activate synchronization services when they first sign into the browser, but if you turn the feature off it stays off thereafter. Microsoft has taken a different tack.

To be honest, I would’ve chalked the problem up to user error if I hadn’t seen this post from security researcher Bruce Schneier. According to him, “I received email from two people who told me that Microsoft Edge enabled synching without warning or consent, which means that Microsoft sucked up all of their bookmarks. Of course they can turn synching off, but it’s too late.”

It says “confirm” because I recreated the screenshot. When I opened this page the first time, Sync was enabled and active by default.

This kind of user-unfriendly behavior is par for the course at Microsoft these days. The company recently admitted that blocking tools like EdgeDeflector, which let end users route Edge-only links in Windows 11 to the browser of their choice, is a deliberate crackdown on choice that it does not intend to roll back.

When asked about its decision to prevent end users from changing Edge as a default browser for certain activities in Windows 11, a Microsoft spokesperson said:

Windows openly enables applications and services on its platform, including various web browsers. At the same time, Windows also offers certain end-to-end customer experiences in both Windows 10 and Windows 11, the search experience from the taskbar is one such example of an end-to-end experience that is not designed to be redirected. When we become aware of improper redirection, we issue a fix.

But there’s nothing “improper” about the redirection as contemplated in the example above, and it’s interesting that Microsoft feels it has the right to say otherwise.

What Does it Mean to Use a Computer ‘Properly?’

Microsoft’s phrasing is instructive and it tells us a lot about how the company views end users in 2021. Let me offer a counter idea:

There is no such thing as an “improper redirection” when said redirection reflects the deliberate choice of the end user, provided the chosen option creates no criminal or civil liability, causes no security issues, and harms no individual.

It's a bit wordy, I admit, but this is not simply a matter of semantics. The "P" in "PC" stands for personal computer. The concept of personal ownership — of sovereignty — is baked into the name. This was not accidental. The PC market exploded as it did because personal computers offered a degree of personalization that existing mainframe and minicomputer systems couldn't provide. These earlier systems were powerful, but they were not personal.

An "improper redirection" sounds like a minor issue. On the surface, it is. Look deeper, and it's part of a long-term pattern. It's now far more difficult to change your default web browser in Windows 11. Users running Windows Home are forced into signing up for a Microsoft account. Now, Microsoft is admitting it believes some steps a PC user might take to safeguard browser choice constitute "improper behavior."

Improper to whom? Presumably not to the person who initiated it. What gave Microsoft the idea it was part or party to that decision? Did a Zoom conference get scheduled by mistake?

It is improper, according to Microsoft, for you to configure your own system to use the web browser of your choice. It is proper, according to Microsoft, for the company to turn on a sync feature that uploads your data to its servers automatically without first seeking consent.

Last year, Microsoft took heat for the way Edge silently imported data from other browsers at startup. These incidents show the company learned nothing from the user outcry. It’s still pushing applications like PC Health Check to end users whether they want it or not. At this point, it seems foolish to expect anything different.

Microsoft clearly believes it has the right to compel its end users to use Edge and to share sensitive browsing data with itself by default, whether end users opt-in or not. Microsoft used to seem like a company that treated end-user data with more respect than Google or Amazon, but the company’s behavior in the six years since the “Get Windows 10” campaign has dented that reputation.

Now Read:

November 18th 2021, 12:59 pm

FBI, CDC Investigating Vials Labeled ‘Smallpox’


A modern smallpox vaccination kit. (Photo: CDC)
The FBI is investigating a handful of newly discovered frozen vials labeled "smallpox," according to the Centers for Disease Control and Prevention. The vials were found in a vaccine research facility in Montgomery County, Pennsylvania, when a lab worker was cleaning out a freezer. Of the 15 vials located, five were labeled "smallpox" while the other ten were labeled "vaccinia."

Thankfully, sources don’t believe anyone has been exposed to the disease. “There is no indication that anyone has been exposed to the small number of frozen vials,” the CDC told CNN. “The laboratory worker who discovered the vials was wearing gloves and a face mask. We will provide further details as they are available.” The vials also appeared to be intact when they were found. 

A similar incident occurred in 2014, when employees at the National Institutes of Health happened upon six vials of smallpox while moving lab materials. Then, too, biosafety personnel couldn't identify any real risk of infectious exposure, though two of the vials contained "viable" virus, meaning virus still capable of replication. The vials were transported to a high-containment CDC facility in Atlanta, where they were tested and then safely destroyed. A similar fate may await the vials just discovered in Montgomery County, which were transported to the CDC yesterday.

The CDC’s Atlanta campus—one of two locations globally where the smallpox virus is meant to be stored. (Photo: CDC)

Smallpox hasn't been much of a concern since 1980; in the century before that, it killed some 300 million people. Thanks to the first vaccine ever developed against a contagious disease, smallpox became the only human disease to be fully eradicated. While the average person no longer receives a smallpox inoculation, vaccines are doled out to researchers and members of the military who run the risk of encountering the virus in their work. And just in case, the US government maintains a stockpile of smallpox vaccine large enough to inoculate everyone in the United States. (Whether that's comforting or terrifying, you decide.)

The smallpox virus is only meant to be stored in two locations globally: at the CDC in Atlanta and at Russia's VECTOR Institute, the country's equivalent of the CDC. The disease is so deadly that lab samples are intentionally restricted; all other samples were meant to be destroyed when smallpox was eradicated decades ago.

Now Read:

November 18th 2021, 11:28 am

VR Hardware CEO Says Haptic Glove Tech from Meta Looks Familiar


A prototype haptic glove design by Meta (Photo: Meta)
On Tuesday Facebook, er, Meta, published a blog post showing off the work it is doing to bring the sense of touch to virtual and augmented reality with haptic gloves. The post caught the attention of Seattle-based Haptx, a company that has been designing similar technology since 2012. In a statement to Geekwire, Haptx CEO Jake Rubin said Meta's gloves "appear to be substantively identical to HaptX's patented technology."

Meta says the gloves it is working on are the result of the work it’s done in four distinct areas of research: perceptual science, soft robotics, microfluidics, and hand tracking. One of those areas, microfluidics, is an area Haptx is heavily invested in, as it goes a long way in solving the problem of how to put a huge number of sensors on a haptic glove in a way that’s unobtrusive. This is a much more accurate and tunable solution than current methods, which include force feedback and vibration. Also, traditional mechanical sensors and actuators generate too much heat, thus the need for a different technology.

Haptx's Gloves DK2 feature a microfluidic skin with 130 sensors per hand and can displace the skin by up to 2mm, according to the Haptx website. Though Meta doesn't go into detail about its gloves' specs, it states, "…it's building the world's first high-speed microfluidic processor — a tiny microfluidic chip on the glove that controls the air flow that moves the actuators, by telling the valves when and how far to open and close." Haptx CEO Rubin told Geekwire similarities between the two designs include "a silicone-based microfluidic tactile feedback laminate and pneumatic control architecture."

These microfluidic systems allow air to flow through micro channels on the surface of the glove, with the flow controlled by tiny actuators. The gloves also have to know where they are in relation to other objects in the virtual world, how much pressure to apply to which part of the hand, and for how long, so it's easy to see why Meta's prototype gloves look like the opposite of comfortable. Meta also notes in a deep-dive blog post that one of the big challenges in the current state of AR and VR is that no existing technology can stop your hand if you try to push it through a solid virtual object such as a wall or desk, or keep your hand from closing completely when picking up a virtual object such as a piece of fruit.

As for Haptx's claims, the CEO's statement to Geekwire certainly sounds like the company is considering its legal options. He said HaptX would "look forward to working with them to reach a fair and equitable arrangement that addresses our concerns and enables them to incorporate our innovative technology into their future consumer products." Geekwire reached out to Meta about the claims but received no response.

Now Read:

November 18th 2021, 11:28 am

Qualcomm Laying Plans for ‘Inevitable’ Transition to ARM PCs


Qualcomm has dipped its toe in the waters of desktop computing previously, but it was leaning on mobile-optimized parts. That will change going forward — Qualcomm believes that the move to ARM architectures in computers is inevitable, and it wants to be ready with new, more powerful processors. 

The first notebook chips from Qualcomm appeared in 2017, but uptake has been slow. Only a handful of Windows machines have used the Snapdragon 7c and 8c platforms, and usage on the Chromebook side is only slightly better. It’s not hard to see why Qualcomm has been slow to gain market share, as most Windows apps aren’t built for ARM, and these CPU cores are based on mobile designs. Current x86 processors are just faster. Now, Qualcomm is ready to talk about what comes next as it targets notebooks. 

Qualcomm began laying the groundwork for this change almost a year ago when it announced the acquisition of CPU startup Nuvia for $1.7 billion. That team has gone on to design a new generation of ARM-based CPU cores that Qualcomm will use to compete directly against Intel, AMD, and Apple. We don’t know what manufacturing process Qualcomm is expecting to use, but 5nm seems likely based on timing. Current flagship parts are 5nm, but there’s a chance Nuvia could be working with 3nm in mind. Qualcomm previously said it expected to manufacture the first 3nm chips in the second half of 2022. 

We know from past ARM products that it should be possible to revamp the CPU for computer workloads with the help of Nuvia engineers. Apple has succeeded in doing that, but it has also made impressive strides in GPU performance. The M1 Max can rival mobile RTX 3060 GPUs in some tests. That might be more challenging for Qualcomm. It says the existing Adreno mobile GPU will be scaled up to desktop levels but doesn’t explain how that will work. However, it did get Adreno from ATI (now part of AMD), so some of that team has experience with desktop computing. 

The Snapdragon 8c compared to a coin.

As the new Qualcomm chips begin appearing, we probably won’t see as much overlap between mobile and personal computer Snapdragon chips. The Nuvia CPU cores will be tailored to the kind of computing workloads people expect on a computer rather than a phone, and they’ll probably need more power. However, it doesn’t rule out that Nuvia cores could occasionally appear in other devices like cars, servers, and maybe even mobile devices. 

The company will make these chips available as manufacturing samples in August 2022, and we could see devices with updated Qualcomm systems-on-a-chip (SoC) in early 2023.

Now read:

November 18th 2021, 10:27 am

What Ingenuity and Perseverance Have Discovered on Mars So Far


Perseverance deployed Ingenuity earlier this month.

Perseverance and Ingenuity landed on Mars almost exactly nine months ago. In that time, both vehicles have expanded our understanding of Mars, and they will continue to do so as they conduct further experiments in the months and years to come. Perseverance is a rover based on Curiosity's general design, but with its own unique capabilities and some features Curiosity lacked. Ingenuity is the small helicopter that became the first human-built vehicle to fly on another world in the spring of 2021. Together, they do science.

We’ve rounded up the major discoveries made by both the rover and its copter buddy, as well as the technologies deployed on each.

Water, Soil, and the Search for Ancient Life

The principal mission of Perseverance and Ingenuity is to search for signs of ancient microbial life on Mars. In service to that goal, the Perseverance rover is loaded with imaging hardware, with which it can capture EM emissions from radio to hard UV. With these instruments, the rover is equipped to make on-the-spot judgments about what’s in the Martian regolith, rocks and atmosphere.

One such piece of imaging tech aboard Perseverance is the Scanning Habitable Environments with Raman & Luminescence for Organics & Chemicals (SHERLOC) instrument. SHERLOC is a boresighted resonance Raman and fluorescence spectrometer. To break that down a bit, it uses a UV laser to ping tiny bits of grit with photons of controlled wavelength, so we can analyze the photons thrown back off the sample to record their wavelength, polarization and other metadata. SHERLOC’s sidekick and “second eye,” WATSON, zooms out where SHERLOC zooms in: WATSON is a wide-angle camera used to document the spectrometer’s samples on the macro scale, just like a terrestrial photographer would try to get good macros of a subject with a DSLR.
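The measurement SHERLOC performs boils down to wavelength bookkeeping: the shift between the laser's photons and the scattered photons, expressed in wavenumbers, fingerprints the chemical bonds in the sample. A minimal sketch, assuming the reported ~248.6 nm deep-UV excitation wavelength for SHERLOC's laser; the scattered wavelength below is purely illustrative:

```python
def raman_shift_cm1(laser_nm, scattered_nm):
    """Raman shift in wavenumbers (cm^-1): 1e7/lambda converts nm to cm^-1."""
    return 1e7 / laser_nm - 1e7 / scattered_nm

# A photon scattered at 255 nm off a 248.6 nm excitation line:
print(round(raman_shift_cm1(248.6, 255.0), 1))  # ~1009.6 cm^-1
```

Comparing shifts like this against known spectra of minerals and organics is how the instrument makes its on-the-spot judgments about what's in a sample.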

Another instrument is the ground-penetrating Radar Imager for Mars’ Subsurface Experiment, known as RIMFAX. Radar waves are sensitive to the dielectric properties of the materials they survey, much like X-rays are sensitive to the absorptive properties of the materials they move through. RIMFAX will peer below the surface, as far as ten meters down into the regolith. Ultimately, this differential response allows us to make a 3D radar model of the subsurface stratigraphy of Mars, with a resolution or voxel size of ten centimeters. RIMFAX is there to characterize what lies below the landing site, but also to look for water ice hidden below the Martian surface where it might not have been sublimated away.
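The depth conversion a ground-penetrating radar like RIMFAX relies on can be sketched from first principles: two-way travel time maps to depth once you assume a relative permittivity for the subsurface. The permittivity value below is an illustrative guess for dry regolith, not a published RIMFAX figure:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def radar_depth_m(two_way_time_ns, rel_permittivity):
    """Depth from two-way radar travel time: d = v * t / 2, v = c / sqrt(eps_r)."""
    v = C / rel_permittivity ** 0.5  # wave speed in the subsurface material
    return v * (two_way_time_ns * 1e-9) / 2.0

# A 150 ns echo through material with eps_r = 5 (assumed value):
print(round(radar_depth_m(150.0, 5.0), 2))  # ~10.06 m
```

Conveniently, a ~150 ns echo under that assumption lands right around the ten-meter depth the article cites as RIMFAX's reach.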

Ingenuity and Perseverance, in a selfie they sent from Mars. Image credit: NASA

So far, perhaps the biggest discovery is that there is montmorillonite clay in the bottom of Jezero Crater. Clay is formed by the weathering of silicate rock in the presence of water. Even if there’s no detectable water ice in the deposit at the moment, clay is taken as a dead giveaway that water was present at one point. Evidence indicates that Jezero once held a crater lake, fed by a river whose delta opens up into a formation called Three Forks on the northwestern edge of the crater.

The Mars 2020 mission is looking for a lot of things, but one of the most important is water — and Jezero was full of it. Here on Earth, water is life: we think the very first amino acids stumbled into being from water full of organic building blocks. It’s believed that if there are traces of life on Mars, they’ll be where the water is, or at least where it was. (Martian lava tubes are another hopeful candidate, because of the traces of early life found in terrestrial lava tubes.) What’s more, we think that the delta is full of boulders. How did they get there? On Earth, rocks that size are only lifted and carried by sizeable floods. Images and data that RIMFAX, SHERLOC and WATSON gather will help us learn more about the history of Mars’ geological features, including just how wet ancient Mars might have been.

Material Science

The total combined mass of Perseverance is just over a metric ton, with the chassis and instrumentation accounting for much of its weight. But Perseverance also carries an ultralight lab, in which it’s carrying out several side experiments to test how things behave in the Martian atmosphere, gravity, and radiation.

The star of the onboard material science show is the SHERLOC spectrometer's calibration panel. Perseverance carries a plate inlaid with swatches of almost a dozen different materials, which SHERLOC uses to calibrate itself. One standout is a section of the polycarbonate material NASA uses in helmet visors. Confirming NASA's awareness of its weapons-grade backronym habit, the polycarbonate is backed with a piece of opal glass bearing the fictional Sherlock's street address. It doubles as a geocache for the public. You know, because we're all just hanging out up there with our GoPros. (Please, please, let me live long enough for my response to the idea of a geocache on Mars to change into sincere excitement at the ability to buy tickets.)

Also on the swatch plate are sections of certain ultra-high-performance flexible composite materials that we use to make the rest of the space suit. There’s a piece of Ortho-Fabric, a tiramisu of thermal/MMOD protective fabrics including Teflon, Kevlar, elastic Dacron for compression, and insulating Gore-Tex fleece. There’s also a piece of the Teflon-coated fiberglass fabric that shields the space suit’s gauntlets, and one of an upgraded version with a new dust coating. In between using them to calibrate its sensors, SHERLOC will periodically examine these swatches to see how they fare under constant exposure to the radiation on Mars’ surface, and the abrasion of its pervasive regolith dust.

SHERLOC’s calibration plate. Top row, from left: aluminum gallium nitride on sapphire; a quartz diffuser; a slice of Martian meteorite; a maze for testing laser intensity; a separate aluminum gallium nitride on sapphire with different properties. Bottom row, from left: fanservice; Vectran; Ortho-Fabric; Teflon; and coated Teflon. (Image: NASA/JPL-Caltech)

While SHERLOC stares unblinking at our own materials to see how they behave, MOXIE does its work by chemically changing matter gathered onsite. It speaks to the Mars 2020 mission directive to gather data and test technologies that will help prepare for crewed missions to Mars. MOXIE, short for the Mars Oxygen In-Situ Resource Utilization Experiment, is on Mars to see what it takes to make breathable amounts of oxygen out of Mars' CO2 atmosphere. To do this, it draws in CO2 and pressurizes it to about one atmosphere. Then it splits the CO2 electrolytically: the gas is reduced at the cathode, yielding carbon monoxide waste and some residual CO2, while the freed oxygen ions recombine into good old diatomic oxygen at the anode.
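The stoichiometry behind that conversion is easy to check: the net reaction 2 CO2 → 2 CO + O2 fixes how much CO2 must be processed per gram of oxygen produced. A quick sketch; the 10-gram target below is illustrative, not a stated MOXIE spec:

```python
# Molar masses in g/mol
M_CO2 = 44.01
M_O2 = 32.00

def co2_needed_g(o2_g):
    """Grams of CO2 consumed per grams of O2 produced, from 2 CO2 -> 2 CO + O2."""
    return o2_g * (2 * M_CO2) / M_O2

# CO2 required to produce 10 g of oxygen (hypothetical target amount):
print(round(co2_needed_g(10.0), 1))  # ~27.5 g
```

In other words, every gram of breathable oxygen costs roughly 2.75 grams of atmospheric CO2, which is exactly the kind of in-situ resource math that will shape a crewed mission's power and hardware budgets.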

MOXIE also serves the broader mission goal of using in-situ resources to answer in-situ needs. We are trying to explore, and potentially colonize, other planets. With our current best propulsion technology, the trip between Earth and Mars is still most easily described in months, not miles. It’s not plausible to ship bottled air to another planet as the sole source of everyone’s next breath — let alone to send bulk metal and concrete to space, when there’s a whole rocky planet underfoot to source our infrastructure from if we can. This is another entry in NASA’s long history of launching one mission along with hardware and intent to clear the way for the next.

Low-Pressure Flight

After fifteen flights, Ingenuity is still going strong. It has flown almost three kilometers in total, and having finished its demo phase, it's now working with Perseverance on their joint objective to pore over Jezero Crater.

Ingenuity began its illustrious career on Mars as a technology demo for rotorcraft flight in a low-pressure environment, well below anything we encounter here on Earth. Mars' atmosphere at surface level is about one and a half percent the density of Earth's atmosphere at sea level. For comparison, the all-time helicopter altitude record on Earth reached the high end of the passenger-jet cruising altitude range, at approximately 42,500 feet. Even at that height, the air on Earth is some thirty times denser than the atmosphere at the Martian surface. That makes piloting Ingenuity not just a first for aerodynamics, but a game-changer for offworld exploration as a whole.

Jezero Crater, as seen by the Mars Reconnaissance Orbiter (MRO). Image credit: NASA

Beyond getting into the record books, being able to fly around on Mars opens up a whole new class of possibilities for exploration. NASA is exploring new pathfinding and problem-solving methods and investing in off-world flight as part of its broad effort to gain traction in the second great space race: the rush to commercialize low-Earth orbit, and eventually to push the human sphere of influence outward into the greater solar system. Rovers are limited by the fact that they have to contend with boulders, cliffs, and crevasses, but NASA's showing at DARPA's 2021 Subterranean Challenge, a rescue-robot Olympics of sorts, showed that the whole game changes if a robot can just use the Z axis to opt out of a terrain hazard altogether.

Off-World Cartography

In addition to being the literal pilot project for powered flight on another planet, Ingenuity has two different cameras on board that enable high-res imaging of its environment. One is a downward-facing black-and-white camera for navigation, and the other is a forward-facing 13-megapixel color camera with stereoscopic imaging capabilities. The ability to photograph the landscape and terrain with this kind of resolution and fidelity is important all on its own for reasons of navigation: we need to know just where Ingenuity is, because it would be terribly unfortunate to perform unplanned lithobraking, what with the communications delay between Earth and Mars. But between those two cameras, Ingenuity is also capturing enough information to make a high-fidelity 3D map of the Martian landscape. Here’s one such image, in stereoscopic red and blue 3D:

Depicted: the Martian rock formation named Faillefeu, located within the Jezero crater. This stereoscopic image should work with standard red and blue 3D glasses! Credits: NASA/JPL-Caltech

Between the stereoscopic 3D and the repeated imaging of the terrain at different times on different sols, the amount of data produced can be used to map the Martian landscape right down to the individual rocks. Not so long ago, we weren’t even certain there was water anywhere else in the cosmos. Now we have a small population of robots on another planet, looking for traces of the emergence of life, and squinting down into the soil to see about the water we’re almost certain is there. This is valuable for legitimate, very serious, and very important scientific reasons. It’s also just incredibly cool that anyone with internet access can watch videos beamed to us from the surface of another planet.
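The depth recovery behind a stereoscopic pair like this follows the standard pinhole-camera relation: depth equals focal length times baseline divided by disparity. A minimal sketch with hypothetical numbers, since the article doesn't give Ingenuity's actual camera geometry:

```python
def stereo_depth_m(focal_px, baseline_m, disparity_px):
    """Depth from a rectified stereo pair: Z = f * B / d (pinhole model)."""
    return focal_px * baseline_m / disparity_px

# Hypothetical values: 1400 px focal length, 4 cm baseline, 8 px disparity.
print(round(stereo_depth_m(1400.0, 0.04, 8.0), 2))  # ~7.0 m
```

Nearby rocks shift more between the two views (large disparity, small depth) while distant terrain barely moves, which is what lets repeated overflights be fused into a 3D map right down to individual rocks.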


Because it aced its initial objectives, Ingenuity's documentary mission was extended in 2021. Until they're decommissioned, Ingenuity and Perseverance will work in concert with the other landers, rovers, and probes we have surveying Mars. Percy and the MRO are already checking each other's work on the evidence of water on Mars. The rover's samples, including the rock cores it has already collected, will be stored in caches on Mars; a future mission in cooperation with the European Space Agency (ESA) will eventually retrieve them.

As time goes by and these results continue to roll in, we’ll update this article. If you have questions about what Ingenuity and Perseverance have been up to on Mars, or what we’ve learned from their tenure there, do let us know in the comments — we will address them in a future update.

Between now and then, if you want to get involved with Perseverance and Ingenuity yourself, you can! Anyone interested is formally invited to dive headfirst into Percy’s photostream as part of the open-access AI4Mars project, which labels the terrain the images depict in order to help train SPOC, the navigation AI that guides our robots on Mars. (No, Sulu was the navigator… but I digress.)

If you’re feeling extra creative, you could even take a shot at improving the algorithm under the hood of AI4Mars itself. The project is committed to making its data and code freely available, both libre and gratis. “If someone outside JPL creates an algorithm that works better than ours using our dataset, that’s great, too,” explained Hiro Ono, the JPL researcher and AI expert who led the development of AI4Mars. “It just makes it easier to make more discoveries.”

Now Read:

November 17th 2021, 4:56 pm

Microsoft Will Make The Elder Scrolls VI a PC and Xbox Exclusive


Microsoft has been trying to expand the number of exclusive titles on Xbox to counter popular Sony properties like Spider-Man, Horizon, God of War, and more. Microsoft’s Xbox head Phil Spencer just confirmed they’ve got a big one planned. Following Microsoft’s acquisition of Bethesda parent company ZeniMax, The Elder Scrolls VI will only be available on Microsoft platforms. That’s not going to sit well with some gamers. 

ZeniMax sold to Microsoft in September 2020 for $7.5 billion, but that might end up being a bargain. By acquiring ZeniMax, Microsoft gained control of franchises like Doom, Fallout, and The Elder Scrolls (TES). Initially, Microsoft was careful about promoting Xbox exclusivity for its new properties. Executives used phrases like "either first or better" to describe the Bethesda launch strategy.

The Elder Scrolls is not the first exclusive to come out of the deal. The upcoming sci-fi epic Starfield was revealed earlier this year as an Xbox and PC exclusive as well. New games are one thing, but TES has a reputation for running on every single platform — Bethesda has released Skyrim so many times in so many places that there are memes about it. You can play Skyrim on the PS3, PS4, PS5, or Nintendo Switch, but the sequel won’t be there, reports Ars Technica. It will only be on Xbox and PC. 

As recently as late 2020, Bethesda execs were public in their belief that the new Elder Scrolls would come to non-Microsoft platforms. However, TES is a very popular franchise. TES VI is one of the few titles that might actually encourage gamers to go with an Xbox over a PlayStation. You can understand why Microsoft reversed course as it’s in danger of playing second banana to Sony two generations in a row. 

If Microsoft can make it to release without being eaten alive by angry gamers, this could be the beginning of a new strategy. The Xbox Series X is lagging far behind in sales to the PlayStation 5 while Microsoft owns some of the most popular game franchises in the world. It wouldn’t be unthinkable for Microsoft to start leveraging those other franchises as exclusive releases. Fallout, Doom, Wolfenstein, and many more could become exclusive to Microsoft. If they’re willing to do it with The Elder Scrolls, they’ll likely do it with anything. 

If you’re a fan of Bethesda franchises, you might not want to pull the trigger on that PS5. Unless you’re committed to one of Sony’s exclusives, the Xbox might serve you better. There’s always PC gaming, of course, but good luck finding a new video card. They’re still in short supply and hugely overpriced, and there’s no sign supply will loosen any time soon.

Now read:

November 17th 2021, 4:56 pm

Apple Announces Self Service Repair Program


The iPhone 6s battery.

No, it's not April Fools. Apple announced today that starting in 2022 it will allow iPhone 12 and 13 owners to repair their broken phones with factory parts and tools sold by Apple. The company will even provide repair manuals on its website, which will let users identify and order the parts required to fix their devices. Once a repair is done, users can ship the defective parts back to Apple and receive a credit to their Apple account for the parts' value. The program will start with late-model iPhone parts in the US only, but will eventually expand across the globe and include parts for Apple's M1-based computers as well.

Apple notes the program will begin with the most commonly repaired iPhone parts: the display, battery, and camera. Interestingly, the company recently came under fire for an iPhone 13 design that disabled Face ID if the display was replaced by a third party instead of an official Apple repair shop. Apple backed down from that stance, in a move that now appears to have foreshadowed today’s announcement. The company says it will offer more than 200 parts for sale on its upcoming web store, but so far it hasn’t said how the parts or tools will be priced.

Details of the Apple Genuine Parts Repair program that leaked in 2019 look a lot like what Apple is announcing today, aside from the training requirement.

The move marks a rather surprising reversal from Apple, which has become known as one of the staunchest defenders of the “nobody can repair our products except us” ethos. For years Apple has made its products more difficult to repair with special screws, glues, and parts, and it has repeatedly resisted calls for exactly the type of program it is launching today. The existence of an Apple Genuine Parts Repair program leaked to the press back in 2019, which suggests Apple has been capable of launching such a program for quite some time and has finally relented, most likely to get ahead of having its hand forced by legislation.

The latest move from Apple also comes on the heels of two recent developments. In March the FTC released a 55-page report on whether manufacturer-imposed repair restrictions treat consumers fairly, concluding they do not. And in October the US Copyright Office loosened restrictions on “right to repair” activities by allowing new exemptions to the Digital Millennium Copyright Act (DMCA), granting consumers the theoretical right to tinker with devices they own, as long as they don’t violate copyright protections put in place by the manufacturer.

Previously, Apple said it opposed this type of self-repair program because it would let bad actors understand its products at a deeper level, enabling more malicious activity. It also said only people with the proper tools, training, and parts are qualified to repair its products, given their intricate designs.

Surprisingly, one outspoken critic of Apple’s previously resistant stance was Apple co-founder Steve Wozniak, who touted the benefits of self-repair in a Cameo video for Louis Rossmann, the most outspoken “right to repair” advocate on YouTube. In the video, Woz said Apple wouldn’t even exist as a company if today’s restrictions on access to computer parts and tools had been in effect back then.


November 17th 2021, 4:56 pm

Windows 10 Drops to One Annual Feature Update Moving Forward


Microsoft has started the staged release of Windows 11, and you can go out and buy a new Windows 11 PC right now. While Windows 10 will continue getting updates as promised, it’s natural to expect Microsoft won’t want to spend quite as much time on yesteryear’s OS going forward. Therefore, Windows 10 will only get one annual feature update instead of two, reports The Verge. Before you get too up in arms, this is the same schedule as Windows 11. 

Early in the Windows 10 era, Microsoft began rolling out semiannual feature updates, previously called Creators Updates. There have been “service packs” in some previous releases of Windows, but these only came out every few years. This practice supported speculation that Microsoft might never release another distinct version of Windows after 10, as did the long wait between Windows 10 and the newly released Windows 11. Perhaps it was Microsoft’s intention to keep updating Windows 10 at first, but it’s moving on with an OS that will only get one feature update per year. 

There’s still one pending feature update for Windows 10 in the works. After Microsoft releases it in the next few weeks, Windows 10 won’t get another substantive patch until late 2022. The upcoming version doesn’t add much; the most prominent feature is GPU compute for the Windows Subsystem for Linux (WSL). Windows 11 won’t get updates any more often, though. The plan is a single yearly feature patch for Windows 11 as well, and we probably won’t see the first of those for a while. 

Currently, Microsoft is focused on rolling Windows 11 out to existing compatible devices. Unlike with Windows 10, Microsoft has opted to support only newer hardware with specific capabilities. For example, PCs need a trusted platform module and a CPU from the last few years to run Windows 11. You can install Windows 11 manually on supported devices, but if you decide to wait until Microsoft pushes it to your system, the prompt might not appear until the middle of next year. 

Even if you can’t (or don’t want to) run Windows 11, Microsoft won’t leave you in the lurch. Windows 10 will get security patches through October 2025, and those should be more frequent than once a year. Just don’t expect as many new features as we’ve been seeing thus far. With just three or four more major updates in the offing for Windows 10, Microsoft probably won’t want to add anything that’s going to need a lot of support in the future. If you want the latest and greatest, it’s probably best to hop on the Windows 11 bandwagon.


November 17th 2021, 10:58 am

Apple is Sticking Taxpayers With the Bill for Its New Digital ID Service


(Image: Apple)
Earlier this year, Apple announced it would be rolling out an option that would allow iPhone users to store a digital version of their driver’s licenses on their devices. This week it became clear that Apple is expecting participating states to foot the bill for the costs associated with the service. 

Known unofficially as “digital ID,” the option will add virtual versions of users’ licenses, state identification, and other forms of ID to their Apple Wallet. While the main idea is to reduce physical clutter and allow users to always have a copy of their ID on them, the service may also help to reduce fraud; a physical ID card can be stolen, but Apple’s digital ID requires a successful Face ID or passcode entry in order for the card to be displayed. It’s the clear next step to a system that has allowed smartphone users to use their devices in lieu of their debit and credit cards when making purchases on the go. 

What wasn’t disclosed in Apple’s initial announcement, however, was that taxpayers would be stuck paying to implement and maintain the digital ID service. According to CNBC, which obtained contracts between Apple and the first four states to adopt digital ID, Arizona, Georgia, Kentucky, and Oklahoma will use taxpayer dollars to support and even market the service. Among other things, states will need to pay to maintain the systems that issue and service credentials, hire project managers, and market the feature, all on the taxpayers’ dime.

(Image: Apple)

In exchange for this, you’d think states would retain a decent amount of control over the day-to-day operation of the service. But that just isn’t the case. Despite the states shouldering the financial burden of the deal, Apple remains in the “driver’s seat,” closely overseeing all marketing efforts and project timelines. Each state’s contract requires it to “allocate reasonably sufficient personnel and resources (e.g., staff, project management and funding) to support the launch of the Program” and even to designate a specific project manager to respond to Apple inquiries, if Apple requests it. States, not Apple, will also be required to provide customer support for any issues involving the service.

While it may come as a surprise that Apple is asking government bodies to market its new feature, it makes sense, in a semi-dystopian sort of way. The more governments “normalize” the use of a product, the more likely other entities are to feel as though they might as well try it. This includes other government bodies, but it also goes for businesses that may need to view customers’ IDs, like hotels or bars. 

Digital ID will first be introduced in Arizona and Georgia, but neither state has launched its program just yet. 


November 17th 2021, 10:58 am

NASA Report Says More Artemis Moon Landing Delays Are Inevitable


We’re coming up on fifty years since a human being last walked on the moon, and it’s going to be a little longer yet. NASA recently pushed the proposed 2024 Artemis moon landing to 2025, but a report from the agency’s inspector general says that won’t be the last delay. As things currently stand, the IG expects Artemis to slip to 2026 at the earliest. 

The Artemis Program is a modern equivalent of Apollo, but whereas Apollo was about short-term lunar exploration, Artemis could help support a long-term human presence on and around the moon. The first step, however, is to develop a vehicle that can take astronauts to the moon and safely return them to Earth. That’s the Space Launch System (SLS), which has been the primary driver of past delays. 

The SLS is a non-reusable rocket, but it will be extremely powerful, expending all of its fuel to hoist the bulky Orion capsule into space. The rocket was nixed from the planning of the Europa Clipper mission, but it should still be ready for Artemis…eventually. NASA is shooting for a test launch in February 2022, but the IG says that is more likely to take place in summer 2022. There are, however, parts of the mission that could lag behind even more. 

According to the IG, the slow pace of work and insufficient funding for NASA’s new spacesuit program will delay Artemis. The best case laid out in the report has NASA getting the first suits ready for use in May 2025. That alone would make NASA’s new launch window unlikely, but there’s a whole separate development process that could slow things down even more. For that, the IG turns to SpaceX. 

NASA has chosen SpaceX to develop the human landing system (HLS) for Artemis using its Starship vessel. Despite Blue Origin’s protests, SpaceX has reportedly already manufactured 20 prototype Starships and 100 Raptor engines that will eventually help people get from Orion down to the lunar surface. The Office of the Inspector General praises SpaceX’s rapid pace of development but notes that the HLS will almost certainly cause additional delays as NASA and SpaceX work to test and certify the vessel. 

At the end of the day, the OIG report estimates the crewed Artemis mission to the lunar surface could launch in 2026. By the end of 2025, the program could cost roughly $93 billion, much higher than the proposed $35 billion budget. Each of the first four launches will cost $4.1 billion. The report urges NASA to find ways to lower mission costs, but space isn’t cheap.


November 17th 2021, 10:58 am

Amazon Sued for Crash Caused by Driver Rushing to Make Deliveries


A lawsuit filed in Georgia state court seeks to answer the question of whether Amazon is liable for injuries caused by the delivery companies it hires as contractors, according to a report from Bloomberg. Ans Rana was in the back seat of his brother’s Tesla when it was hit from behind by a speeding Amazon delivery driver employed by a contractor named Harper Logistics, leaving Rana with life-changing injuries.

The heart of the case is how much control Amazon exerts over the thousands of contractor companies it uses to deliver its packages every single day. Naturally, Amazon says it’s not liable since the driver was employed by a contractor, but Rana’s attorney, Scott Harrison, argues that Amazon is running the show. For example, according to the lawsuit, Amazon contractor vehicles are outfitted with a bevy of sensors and cameras to monitor every aspect of the drivers’ experience, including, “backup monitoring, speed, braking, acceleration, cornering, seatbelt usage, phone calls, texting, in-van cameras that use artificial intelligence to detect for yawning, and more.” Amazon also allegedly dictates to the contractors how many packages must be delivered per 10-hour shift, and the suit alleges drivers are monitored in real time as well. If they fall behind schedule, an Amazon employee will alert the contractor with a text message, encouraging them to “rescue” the driver.

Footage from the Tesla’s rear-facing camera of the Amazon delivery van about to crash into the stopped car (Photo: Ali Kamran via Bloomberg)

To help decipher exactly how much control Amazon exerts in these situations, Rana’s legal team wants more information about how Amazon manages its fleets of contractor vehicles. Amazon argues this information is proprietary, and that exposure in court would put what it deems protected trade secrets at risk. Still, as Bloomberg notes, a victory against Amazon is far from assured; these types of cases have gone both ways in the past, and laws vary from state to state. The suit certainly raises interesting questions about where the line between contractor and employer liability lies, however.

This case could also have a profound impact on not just Amazon’s current logistics operations but future incidents as well. According to Bloomberg, Amazon has been a defendant in 119 motor vehicle injury lawsuits this year alone, across 35 states. An Amazon spokesperson told Bloomberg the company has invested over $1 billion in training and technology to enhance safety for its delivery drivers, including outfitting more than half of its US fleet with video cameras that alert drivers in real time. The alerts cover driving without a seatbelt, distracted driving, signal and stop sign violations, and more.

One interesting note about the case: the Department of Transportation apparently requires accidents of this nature to be reported only when the vehicle weighs over 10,000 pounds, a threshold Amazon’s vans fall under, so these crashes don’t have to be officially reported. Amazon also outsources much of its delivery operations to contractors who lease the vehicles and must maintain their own insurance, explicitly for incidents such as this.

On a related note, a UK law firm filed suit against Amazon in October, arguing that the micro-level control Amazon exerts over its contractors should qualify them as Amazon employees. The same firm gained notoriety in February of this year for suing Uber over how it classified its “self-employed” drivers.


November 16th 2021, 6:42 pm

Nvidia Announces Open Source Image Scaling, New DLSS and a Free ICAT Tool


It’s a big news day for Nvidia, as the company has unleashed a barrage of software updates, including some free tools that could prove quite useful. At the top of the list is an updated version of DLSS (Deep Learning Super Sampling), followed by news that its Nvidia Image Scaling (NIS) technology is going open source so it can work with any GPU. Last but not least, the company is releasing ICAT, the Image Comparison and Analysis Tool. Let’s dive in.

Nvidia’s DLSS has been updated to version 2.3, which according to Nvidia, “…makes even smarter use of motion vectors to improve object detail in motion, particle reconstruction, ghosting, and temporal stability.” The following games will include support for the latest version of DLSS:

Additionally, 10 new games are receiving support for DLSS, including the recently launched Battlefield 2042, and Grand Theft Auto: The Trilogy.

Moving on to what is arguably the biggest news, Nvidia has updated its Image Scaling technology and will make it open source going forward in order to compete with AMD’s FidelityFX Super Resolution (FSR). That means it can theoretically work with any GPU, and the SDK is available on GitHub. The big change for Nvidia card owners is a new slider you can adjust via GeForce Experience, including in-game via the overlay; pressing Alt-F3 in-game lets you see the adjustments it makes in real time. Alternatively, you can change the scaling setting globally or per-game in the Nvidia Control Panel.

The newest GeForce driver lets you customize image-scaling per-game or globally.

Nvidia says its updated scaling tech has, “…a new algorithm that uses a 6-tap filter with 4 directional scaling and adaptive sharpening filters to boost performance. It sharpens and scales in a single pass, so is very efficient.” Nvidia also included a comparison of the various upscaling technologies in the game Hired Gun, showing native 4K, 4K FSR, 4K NIS, and DLSS.

Scaling tech compared on an RTX 3060, AMD 59590X, and 32GB of RAM.
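To illustrate the general scale-then-sharpen pattern Nvidia describes, here’s a deliberately simplified NumPy sketch. It is not Nvidia’s actual 6-tap directional filter; it just pairs a naive integer upscale with an unsharp mask in one function:

```python
import numpy as np

def upscale_and_sharpen(img, scale=2, amount=0.5):
    """Illustrative scale-then-sharpen pass (NOT Nvidia's actual NIS
    filter): nearest-neighbor upscale followed by an unsharp mask."""
    # Nearest-neighbor upscale by an integer factor.
    up = img.repeat(scale, axis=0).repeat(scale, axis=1)
    # Simple 3x3 box blur built from shifted copies (edges wrap).
    blur = sum(np.roll(np.roll(up, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    # Unsharp mask: add back a fraction of the detail the blur removed.
    return np.clip(up + amount * (up - blur), 0.0, 1.0)

frame = np.random.rand(8, 8)   # stand-in for a rendered frame
out = upscale_and_sharpen(frame)
print(out.shape)               # (16, 16)
```

The appeal of doing both steps in one pass, as Nvidia claims for NIS, is that each output pixel is touched once instead of running a separate upscale and sharpen filter over the whole frame.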

Finally, Nvidia is releasing ICAT for pixel-peeping screenshots to compare image quality, which you can download here. According to Nvidia, “ICAT allows anyone to easily compare up to 4 screenshots or videos with sliders, side-by-sides, and pixel peeping zoom-ins. Align comparisons spatially and temporally, examine the differences, and draw your conclusions.”

Editor’s Note: I’ve spent some time with ICAT since Nvidia announced the program. It’s a tool for easy A/B comparisons between two different versions of an image or video. There’s a useful online service called imgsli that allows you to upload two different versions of the same image to compare them easily with a slider. ICAT is a more sophisticated version of the same concept with the ability to handle video, not just images. —JH

All of these new technologies are available today, either via the Nvidia website or with the updated GeForce driver, version 496.76. The new Image Scaling features are available via GeForce Experience version


November 16th 2021, 6:42 pm

New Drug Class Reverses Paralysis in Mice


(Photo: Josh Riemer/Unsplash)
A new study out of Northwestern University has revealed that a novel class of drugs may have the power to reverse paralysis.

In the latest issue of the journal Science, researchers shared that the new drug was injected into tissue surrounding the spinal cords of paralyzed mice. The drug then got to work on regenerating parts of neurons within the spinal cord, diminishing the presence of scar tissue, protecting motor neurons, and forming blood vessels to deliver nutrients to damaged areas. Myelin, which acts as the nervous system’s electrical tape to facilitate efficient signaling, was also found to have reformed around cells at the injury site. After four weeks, the mice were once again able to walk.

The drug works by creating a network of nanofibers that imitate the spinal cord’s extracellular matrix. This structural duplication allows the injectable drug to communicate with cells within the mouse’s central nervous system and engage with cellular receptors. Once those receptors are engaged, the drug triggers two signals. One encourages axon growth on neurons within the spinal cord, while the other promotes the regeneration of lost blood vessels that are vital to neurons’ vitality, as well as the body’s ability to conduct tissue repair. 

“Our research aims to find a therapy that can prevent individuals from becoming paralyzed after major trauma or disease,” said Samuel I. Stupp, a materials scientist and professor at Northwestern who led the study. “We are going straight to the FDA to start the process of getting this new therapy approved for use in human patients, who currently have very few treatment options.”

Almost 300,000 Americans live with a spinal cord injury that has resulted in some form of paralysis. The prevalence of paralysis has had researchers working toward a cure for years, but the central nervous system has an extremely limited capacity for repair. Those dealing with paralysis have traditionally been left with anti-inflammatory medications and physical rehabilitation as treatment options, but the former is more of a band-aid solution (anti-inflammatories assist with pain but don’t resolve paralysis) and the latter can take years to show results. 

Though Northwestern’s new drug has only been used on mice so far, it’s a hopeful development in a field of research that has seen a dismaying number of dead ends over the last four decades. Stupp also believes the drug may have the potential to be used on other targets, such as brains impacted by strokes and neurodegenerative diseases like ALS and Parkinson’s. 


November 16th 2021, 6:42 pm

Nvidia Confirms Some GeForce Now Games Are Capped Below 60 FPS


You have plenty of cloud gaming options at your disposal these days, from Microsoft’s cross-platform Xbox Game Pass to Google’s Stadia platform. There’s also Nvidia’s GeForce Now, which the company has been tinkering with for the better part of a decade, but it’s still not a perfect experience. Case in point: gamers have noticed that Nvidia caps frame rates for some games below 60 fps, even for those on the most expensive service tier. 

GeForce Now, Stadia, Amazon’s Luna, and all the other cloud gaming products have their own distinct hardware platforms and features, but the basic functionality is the same. Instead of rendering a game locally with a game console or a GPU, a server does the rendering and streams the video to you. Your control inputs go back up to the server, and provided your connection is fast enough, you’ll be able to play the game normally. There are limits, though, on how much detail cloud gaming can deliver right now. 

A Reddit user who goes by /u/LizzieLovesDaGlizzy recently noticed that some titles on GeForce Now felt a little sluggish. After contacting support, they were pointed to a recently posted knowledge base article in which Nvidia confirms that some “graphics-intensive” games are locked below 60 fps. There are 12 games on the list right now, and some of them don’t seem especially demanding. It doesn’t even matter if you’re subscribing to Nvidia’s new RTX 3080 plan; you still can’t hit 60 fps in these games. 

For example, Path of Exile is a free-to-play RPG with a lowly GTX 1050 as a recommended GPU, and Dying Light, which launched in 2015, only lists a GTX 780 as the recommended hardware. Those games are both locked at 50 fps, but some other titles are even lower. Cyberpunk 2077 is an undeniably beefy game — even powerful gaming PCs have trouble running it well. You might expect that Nvidia’s 3080-powered servers could handle it, but no, Cyberpunk is capped at 45 fps. 

For most games, 60 fps is seen as the ideal starting point for a smooth experience. Some people have higher-refresh-rate monitors or TVs, and there are even displays that match their refresh rate to the frame rate using G-Sync or FreeSync. If you don’t have one of those, though, you might notice more judder in these games because the frame rate is locked just shy of 60. It’s unclear whether Nvidia considers these caps a fact of life or whether future improvements to GeForce Now could lift them to an even 60 fps. For the time being, take a peek at Nvidia’s page before assuming a new game will live up to Nvidia’s advertised 1080p/60 claims.
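That judder has a simple arithmetic cause: on a fixed 60Hz display without variable refresh, a sub-60 frame rate can’t divide the refresh rate evenly, so some frames linger on screen for an extra refresh. A small sketch (not tied to any GeForce Now internals) makes the cadence visible:

```python
from math import floor

def refreshes_per_frame(fps, hz=60, frames=12):
    """Display refreshes each rendered frame occupies on a fixed-rate
    screen (no G-Sync/FreeSync). Frame i is visible from i/fps until
    (i+1)/fps, i.e. for floor((i+1)*hz/fps) - floor(i*hz/fps) refreshes."""
    return [floor((i + 1) * hz / fps) - floor(i * hz / fps)
            for i in range(frames)]

print(refreshes_per_frame(60))  # all 1s: perfectly even pacing
print(refreshes_per_frame(50))  # every 5th frame lingers an extra refresh
print(refreshes_per_frame(45))  # every 3rd frame lingers: visible judder
```

At Cyberpunk’s 45 fps cap, the pattern is 1, 1, 2 repeating: two frames shown for one refresh each, then one frame held for two. That uneven cadence is exactly what the eye perceives as judder.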


November 16th 2021, 11:27 am

First Mover: Intel’s 4-bit 4004 CPU Turns 50


Thomas Nguyen, CC BY-SA 4.0

50 years ago today, on November 15, 1971, Intel launched the 4004 and, by extension, the modern computer age. The 4004 was the first commercially produced microprocessor, or CPU. It was an enormous technical achievement for Intel, and the design team, led by Federico Faggin, broke new ground in several ways.

Intel hired Faggin to solve a problem: It had negotiated a deal with a Japanese calculator company named Busicom, under which Intel was to produce a four-chip calculator design for upcoming Busicom products. The problem was that nobody at Intel could actually build the CPU; at the time, Intel was principally a memory company. Faggin was briefed on the behind-schedule Busicom project on his first day. In his own account of the 4004’s development, published back in 2009, Faggin wrote:

Stan also told me that Shima was arriving in a few days to check on the progress, expecting to find the logic design of the CPU completed and the other chips in an advanced state of design. The problem was that since late 1969 no work had been done on the project, and Busicom was not told about it.

When I saw the project schedules that were promised to Busicom, my jaw dropped: I had less than six months to design four chips, one of which, the CPU, was at the boundary of what was possible; a chip of that complexity had never been done before. I had nobody working for me to share the workload; Intel had never done random-logic custom chips before, and, unlike other companies in that business, had no methodology and no design tools for speedy and error-free design.

Faggin goes on to note that people at Intel like Andy Grove, “…considered my project a diversion dreamed up by the marketing guys to make some money while waiting for the memory business—the real mission of Intel—to mature.”

The 4004 block diagram. Image by Appaloosa, CC BY-SA 3.0

Despite these difficulties, Faggin delivered the CPU by December 1970. The 4004 was built using then-new Metal Oxide Semiconductor (MOS) silicon gate technology. It contained 2,300 random-logic transistors, with a basic instruction cycle time of 10.7µs, or roughly 92,000 instructions per second. While that level of performance doesn’t exactly turn heads today, delivering it in a four-chip design was a breakthrough in 1971.

Even the chip’s naming convention was new. Faggin began a new method of naming, grouping the 4001 (ROM), 4002 (RAM), 4003 (shift register), and 4004 (CPU) in sequence to show they were part of the same product family. Intel’s 4004 would be succeeded by the 8008, 8080, 8085, and, eventually, the 8086. Faggin would go on to found Zilog, an early Intel competitor whose Z80 CPU was popular in the late 1970s and early 1980s. Zilog later focused on the microcontroller market rather than chasing Intel into the high-performance microprocessor business, but the company remains in business today.

As for the 4004, it’s not directly related to the x86 CPUs we use today, but one might think of it as an australopithecine to our own Homo sapiens. While Intel’s “8008” moniker was chosen to build directly off the success of the 4004, the two designs are unrelated, and it was the 8008 that led to the x86.

Chips like the 4004 and the 8008 (as well as various CPUs from Intel’s competitors like the somewhat later 6502) demonstrated that there was a burgeoning market for small microprocessors that were nowhere near as powerful — or as expensive — as the mainframe-class hardware available at the time from companies like IBM. Intel would eventually shift away from memory and pour more resources into CPU development.

Looking back at the 4004 emphasizes just how far we’ve come. The 4004 ran at 750 kHz. It shipped at a time when IPC (instructions per cycle) was more commonly discussed as CPI (cycles per instruction), and it took a minimum of ~8 clock cycles to process a single instruction. It can be hard to remember there was a time when a calculator took more chips (and more total silicon area) than what now powers a modern iPhone or Android device. In less than a single human lifetime, we’ve increased clock speeds by ~6,000x, to say nothing of the improvements in IPC, power consumption, and die area.
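Those figures are easy to sanity-check. In the sketch below, the numbers come from the article except the ~4.5GHz modern clock, which is our own illustrative assumption:

```python
# Sanity-checking the 4004 figures quoted above.
CLOCK_HZ = 750_000        # 4004 clock: 750 kHz
CPI = 8                   # ~8 clock cycles per instruction
CYCLE_TIME_S = 10.7e-6    # quoted basic instruction cycle time (10.7 us)

# Two routes to the same throughput estimate, both near the ~92,000/s figure:
ips_from_clock = CLOCK_HZ / CPI      # 93,750 instructions/sec
ips_from_cycle = 1 / CYCLE_TIME_S    # ~93,458 instructions/sec

# Clock-speed growth alone, assuming a ~4.5 GHz modern core:
speedup = 4.5e9 / CLOCK_HZ           # 6,000x

print(round(ips_from_clock), round(ips_from_cycle), round(speedup))
```

Both estimates agree with each other and with the article’s ~92,000 instructions per second, and the ~6,000x clock-speed figure ignores the far larger gains from IPC improvements since 1971.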

Happy 50th birthday to the 4004 and the microprocessor revolution.


November 15th 2021, 6:55 pm

Apple Reportedly Buying Ads for High-Value Apps To Compensate for Epic Ruling


Apple got rid of TouchID with the iPhone X, leaving users with passcode entry and FaceID. (Photo: Miguel Tomas/Unsplash)

Apple and Epic Games duked it out in court recently, and Apple came away with an almost total victory. But Apple isn’t accustomed to not getting its way, and it’s not taking that minor loss lying down. With Apple forced to let developers direct customers to non-Apple payment options, the iPhone maker has resorted to buying ads for high-value apps that send potential customers to its App Store, so it can keep taking its cut of subscriptions, reports Forbes.

Epic’s lawsuit against Apple was officially about Fortnite, but Epic sought to make it a referendum on the entire App Store model. According to Epic, allowing Apple to rule over iOS software with an iron fist is a violation of antitrust regulations. The court disagreed, handing Apple a major win… except when it comes to payments. The judge ruled that Apple can’t prevent developers from encouraging their users to buy products and subscriptions from non-App Store platforms. That deprives Apple of its revenue cut, which can be as high as 30 percent. 

As the date of the change approaches, some popular developers are complaining that Apple has started advertising their apps. That might sound like a good problem to have, but Apple has an ulterior motive. By buying up ads for apps like HBO Max, Tinder, and Masterclass, it can make sure it continues to get the App Store cut. Apple has reportedly been doing this for about two years, but it’s ramping up as the required changes go into effect. 

It all started when Epic tried to sell V-bucks in Fortnite without paying Apple its cut.

Apple has publicly expressed its displeasure at being forced to make even this small change. It attempted to get the order suspended, but a judge last week ordered the company to comply by December 9th. Apple contends this will be the first time it has allowed developers to direct users to an outside link, and that it needs time to decide how to proceed. However, it seems Apple really just wants time to figure out how to compensate for the lost revenue. Apple, for its part, claims it has been running App Store ads for years and sees no problem with the practice.

There’s nothing to stop Apple from advertising the apps in the App Store, and it might help some developers in the long term. However, it reduces the number of customers who might buy directly from the developer now that such purchases are possible, and it makes ads more expensive if the developer wants to compete. Epic’s lawyers would probably point to this as further evidence that Apple behaves in an anti-competitive fashion. Regardless, the courts have been unwilling to rein in Apple’s excesses thus far.


November 15th 2021, 4:56 pm

Microsoft Updates Windows 11 to Ignore Browsers Other than Edge for Certain Links


Microsoft is up to its old tricks again, again. A new update to its Windows 11 operating system makes it impossible to circumvent its built-in web browser, Microsoft Edge, for specific links such as those found in the Start Menu, Widgets, etc. Previously, a tiny app named EdgeDeflector provided a workaround for this behavior, but according to its author’s blog, a new build prevents his app from working entirely.

If you’re not familiar with EdgeDeflector, it’s an app that could be set as the default handler for links using the “microsoft-edge://” protocol, which are links within the Windows shell that would otherwise open in Microsoft Edge. The app simply re-routed those links to https://, so they would open in whatever you had set as your default browser. If you’re not sure which app yours is set to, copy this link — microsoft-edge: — then press Windows Key+R, paste it into the Run dialog, and hit Enter.
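The rewrite EdgeDeflector performed can be sketched in a few lines of Python. This is a simplified illustration of the idea, not the app’s actual implementation (the real app is a Windows program that also registers itself as the microsoft-edge: protocol handler):

```python
from urllib.parse import parse_qs, unquote, urlparse

def deflect(microsoft_edge_uri: str) -> str:
    """Rewrite a microsoft-edge: URI into a plain https:// URL.

    Simplified sketch: the shell passes either the URL directly after
    the scheme, or a query string carrying it in a 'url' parameter.
    """
    rest = microsoft_edge_uri.split(":", 1)[1]
    if rest.startswith(("http://", "https://")):
        return rest
    # Fall back to the 'url=' query parameter form.
    params = parse_qs(urlparse(microsoft_edge_uri).query)
    return unquote(params["url"][0])

print(deflect("microsoft-edge:https://example.com/news"))
# -> https://example.com/news
```

Once the scheme is stripped, the resulting https:// URL can simply be handed to whatever browser is registered as the system default.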

Previously you could change links associated with Edge to EdgeDeflector with a simple click.

According to the software’s author, the new Windows Insider Preview build #22494 defeats his app completely. There are no workarounds or registry hacks that will get these links to open in any browser other than Edge, period. The change is paired with Microsoft adding widgets and “web experiences” to Windows 11 that automatically open links in Edge, and any link that appears when you search in the Start Menu will likewise open only in Edge. And as you may have experienced, once you open Edge the software will often nag you to make it your default browser (take me back, please!). Anecdotally, we can recall mundane Windows 10 updates that put a link to Edge back in our Taskbar, so suffice it to say this behavior from Microsoft will come as a surprise to exactly nobody.

This is on top of Microsoft making it more difficult to permanently change the default browser in Windows 11. If you don’t set a new file association to “always use this app” the first time you run it, you will have to manually change every single file association for web links to get them to not open in Edge. There’s a pattern here, obviously, and as the software author notes, it hearkens back to the olden days where Microsoft was taken to task for promoting its Internet Explorer web browser over offerings from its competitors.

As far as what the future holds for EdgeDeflector goes, it’s looking pretty bleak. The author writes, “There are still ways I can work around the limitation, but every method left in my toolbox will require making destructive changes to Windows. Changes that can cause issues for the user down the line, and issues that I frankly don’t want to support.” No further updates to the software will be forthcoming until Microsoft reverses its position on the issue, which is unlikely. However, to be fair to Microsoft, a lot of users do actually enjoy using Edge now that it’s Chromium-based, and the company just completely reversed course by letting people install games purchased from its stores wherever they want, so we’ve seen Microsoft listen to its customers. Perhaps it will do so again here (but probably not).

Now Read:

November 15th 2021, 4:56 pm

Boeing Accepts Liability for Ethiopian Airlines Crash


(Photo: Boeing)
Two and a half years after a plane crash involving one of its 737 MAX 8 jets, Boeing has accepted full liability for the Ethiopian Airlines tragedy which claimed 157 lives.

In a US District Court filing from last week, Boeing accepted full responsibility for the crash that unexpectedly befell Ethiopian Airlines Flight 302 in March 2019. The filing goes so far as to absolve the pilots (and any other individuals aboard the flight) of responsibility. Boeing’s acceptance of liability opens the door for victims’ families to finally negotiate compensatory damages. Of course, given that this is a legal agreement with a very large company, there is a catch: victims’ families will be prohibited from seeking punitive damages from MAX supplier Rosemount Aerospace, as well as its parent company, Rockwell Collins.

Though the filing is a positive development, Boeing didn’t accept liability without a fight. The judge overseeing the Ethiopian Airlines case has been ordering Boeing for months to turn over documents relating to its Maneuvering Characteristics Augmentation System (MCAS), as it’s widely believed that an MCAS failure led to the crash. (It was also discovered shortly after the crash that Boeing never installed two potentially life-saving safety features on the 737 involved, as the company preferred to sell those features as add-ons for extra profit.) Boeing resisted sending over the requested documents, saying many of them were irrelevant and that pulling all of them would involve a hefty 350,000 engineering hours. The company’s excuses didn’t stand up in court, however, and Boeing eventually turned over the documents. Boeing accepted liability for the crash shortly afterward.   

Boeing has understandably struggled with PR since the Ethiopian Airlines crash made global headlines. In a slew of internal emails leaked last year, employees told one another they wouldn’t put their families in Boeing MAX aircraft, calling the development of the 737 MAX a “shitshow” and questioning whether the jet would meet Federal Aviation Administration regulations. To make matters worse, the Ethiopian Airlines crash was the second Boeing 737 MAX crash within the span of six months—Lion Air Flight 610 had met the same unfortunate end the previous November, leading the entire world to wonder why on earth Boeing hadn’t parked its 737s the first time. The judge on the Ethiopian Airlines case is the same judge who continues to oversee negotiations related to the Lion Air crash.

For better or for worse, Boeing’s 737 MAX jets were tested for recertification in 2020 and are flying once again. 

Now Read:

November 15th 2021, 3:26 pm

Rumor Mill: Next-Gen AMD and Nvidia GPUs Will be 500W Monster Cards


(Photo: Laura Ockel/Unsplash, PCMag)
According to a Twitter account that posts rumors about upcoming tech, the next generation of GPUs from both AMD and Nvidia are going to be unlike anything we’ve ever seen before, both in terms of raw power and power consumption. The Twitter user @greymon55 has listed possible specs for what could be upcoming GPUs from both camps, and though they are obviously just rumors at this point, they point to what could be a massive leap forward for both companies, with the flagship cards effectively offering more than twice the power of today’s tentpole products.

(Photo: greymon55 on Twitter)

Starting with Nvidia, the tipster indicates the upcoming Ada Lovelace AD102 GPU will be built on a 5nm process from TSMC, a big change since the current Ampere lineup is produced on Samsung’s 8nm process. TSMC’s 5nm process is responsible for the impressive Apple SoCs used in both its iPhones and its new M1 Pro and Max silicon. AD102 should pack a whopping 144 streaming multiprocessors, versus 82 in the current RTX 3090. It could also have 18,432 CUDA cores, roughly a 76 percent increase over the 3090’s 10,496. This lines up with previous rumors from a year ago from a different poster. It should retain a 384-bit memory bus and continue to use GDDR6X memory. Core clock speeds could hover in the low 2GHz-plus range, though this is one area where rumors have fluctuated. The newest leak suggests Nvidia may be able to crank the clocks up to 2.3GHz, resulting in jaw-dropping FP32 performance of between 85 and 92 Teraflops. The RTX 3090 is only theoretically capable of 35 Teraflops, so this would be a gigantic leap forward.

Of course, all that power also means the card will most likely be an energy-guzzling behemoth. This particular leaker suggests AD102 could consume somewhere between 450 and 650W at peak, and we’ve heard similar rumors that the card could end up somewhere around 500W. That is a sizable jump from the current maximum of 350W for the RTX 3090. The previous generation of GPUs, generally speaking, all maxed out at 250W.

(Photo: greymon55 on Twitter)

In the AMD camp, the rumors are heating up as well, pardon the pun. Dubbed Navi 31, aka the RX 7900 XT, the next flagship GPU from AMD is switching from a monolithic design (which Nvidia will apparently be sticking with for Ada) to a Multi-Chip Module (MCM) layout similar to what it’s currently using in some of its Ryzen and Epyc CPUs. According to WCCFTech, AMD will be dropping Compute Units (CU) in favor of Work Processor Groups (WGP), and the MCM will feature a Graphics Core Die (GCD) and a Multi-Cache Die, the former coming from TSMC’s 5nm line and the latter from its 6nm process. With two GCDs per card at 7,680 cores each, that’s 15,360 cores total, compared with 5,120 “streaming processors” in the current RX 6900 XT, though with such a radical change in architecture, comparisons between new and old are becoming less meaningful.

Rounding out the specs will be up to 32GB of GDDR6 on a 256-bit interface, 256MB of Infinity Cache per module for a total of 512MB of on-die “3D” cache, and clock speeds hovering in the mid-2GHz range. This could allow the GPU to achieve a theoretical FP32 performance of around 75 Teraflops, which is lower than the rumored power of Nvidia’s Ada card, but FP32 prowess doesn’t directly translate into gaming performance. Finally, the rumormonger says they have no data about total board power, but they think it’ll be between 350 and 550W.
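As a sanity check on those Teraflop claims, theoretical FP32 throughput for a modern GPU is just shader cores × 2 FMA operations per clock × clock speed. A quick sketch using the leaked core counts (the clock speeds for the unreleased cards are guesses pulled from the rumors; the RTX 3090 figures are its official specs):

```python
def fp32_tflops(cores: int, clock_ghz: float) -> float:
    """Theoretical FP32 throughput: cores x 2 FMA ops per clock x clock."""
    return cores * 2 * clock_ghz / 1000  # TFLOPS

# Rumored/leaked figures; clocks for unreleased parts are speculative.
print(fp32_tflops(18432, 2.3))    # Nvidia AD102 at 2.3GHz: ~84.8 TFLOPS
print(fp32_tflops(15360, 2.5))    # AMD Navi 31 at 2.5GHz:  ~76.8 TFLOPS
print(fp32_tflops(10496, 1.695))  # RTX 3090 at boost clock: ~35.6 TFLOPS
```

The rumored numbers are internally consistent: 18,432 cores at 2.3GHz lands right at the bottom of the leaked 85-92 Teraflop range, and 15,360 cores in the mid-2GHz range gives roughly the 75 Teraflops claimed for AMD.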

To summarize, we know these are just rumors and they could be wildly off the mark, but rumors like this are all that has been coming through the transom for some time now, and they match leaks from almost a year ago as well, so our anticipation level is definitely high for these GPUs. Also, the most interesting question these GPUs raise is one the original poster wrote on Twitter: “Double performance, double power consumption, can you accept it?” Our answer? Oh, hell yes we can.

Now Read:

November 15th 2021, 3:26 pm

Intel 12th Gen Alder Lake Non-K CPU Specs Leaked


So far Intel has only officially released the “K” versions of its 12th Gen Alder Lake CPUs (review incoming, btw), which are 125W high-end “unlocked” models designed for overclocking. For those who are more interested in the 65W lower-end parts, it’s been a waiting game so far, with no word from Intel on when it would spill the beans on its “regular” CPUs designed for the masses. Luckily for us, those beans have now spilled into the light of day thanks to Dell and other system integrators, which have posted the very likely embargoed information seemingly by accident. What’s really interesting this time around is it appears Intel will be selling lower-end CPUs without integrated graphics, which is a bold-yet-questionable move from the tech giant. However, as Tom’s Hardware notes, these specs should be taken with a huge grain of salt, as the launch is still months away and it’s likely the specs aren’t finalized.

On a manual page for Dell/Alienware’s Aurora 13 desktop, there’s a chart showing all the compatible Intel CPUs, and it lists quite a few of the non-K chips we’ve been curious about. For starters, there are two versions of the Core i9-12900, which has the same 8P+8E core count as the K version but with slightly lower clock speeds, allowing it to fit in a 65W envelope. There’s a standard version and an “F” version, both with a total of 24 threads across their Performance and Efficiency cores. But whereas the K version has a boost clock of 5.2GHz, the non-K reportedly only goes to 5.1GHz, and the F version stops at 5GHz.

Another eyebrow-raiser is the non-K Core i5-12600, as this range of CPUs has been a staple on the “performance for the price” podium for many years. In the Dell chart it is listed as having just six Performance cores with Hyper-Threading, and no E cores for efficiency whatsoever. This is a big departure from the higher-end K chips, which feature a hybrid design of Performance (P) and Efficiency (E) cores, which is also Alder Lake’s “game changing” feature. This tracks with Tom’s Hardware’s reporting that the lower-end chips will all use Alder Lake-6P dies, which are physically smaller than their big brothers and lack the Gracemont efficiency cores.

Down at the bottom of the range we have the Core i5-12400F, likely a six-core, 12-thread, 65W CPU with a very conservative 2.5GHz base clock and an unknown boost clock.

As far as when these CPUs will officially be announced or become available to purchase, it’s all speculation at this point, but rumors point to CES 2022 as a likely launch window, since that will have given the high-end chips time to shine all by themselves in the marketplace. Also, if you’re into the whole brevity thing, VideoCardz has put all the known leaks into handy charts you can examine right here.

Now Read:

November 15th 2021, 9:56 am

Rockstar Pulls GTA Remastered Trilogy from PC Due to Authentication Outage


Rockstar was supposed to be riding high on the release of the remastered Grand Theft Auto: The Trilogy right now. Alas, it was not to be. The publisher’s decision to make the character models look like a low-fi version of The Sims was unpopular, but the complete collapse of Rockstar’s online authentication system is even worse. With no way to confirm ownership on PC, gamers can’t even hold their noses and enjoy the GTA carnage. As a result, Rockstar has pulled the PC version from sale on the company’s website.

I know what you’re thinking: how could Rockstar screw this up? GTA 3, GTA: San Andreas, and GTA: Vice City are some of the most beloved games of the last 20 years. The unstoppable march of technology also means almost every device can run these games quite easily in 2021. These titles predate the always-online world in which we currently live, but Rockstar decided to shoehorn its online Rockstar Games Launcher into the release. The first time you launch the games, they have to authenticate. No authentication, no game. 

According to Rockstar’s tweets, it has suffered an outage of the servers that manage the launcher. That means any game that requires that system to authenticate won’t work, and that includes the brand new Grand Theft Auto Trilogy. However, as games from the early 2000s, these titles don’t have any online features — Rockstar didn’t bother to support the two-player mode from San Andreas or even something as simple as an online leaderboard. So, the only online service is authentication, and it’s broken. 

The outage started on Thursday (November 11th), and it has continued for more than a day. As of this posting, you still can’t buy the Remastered Trilogy on PC. Going to the official page simply says the title is temporarily unavailable. Naturally, people who did buy the trilogy in spite of the graphics are upset they can’t play the games. And if you think you could just pick up a classic copy of the games, think again. Rockstar pulled the original versions from all digital storefronts in advance of the remastered release. Oh, you already own them? Well, they also need the Rockstar Games Launcher, so they don’t work, either. 

Rockstar has not given any indication when the launcher will be online again. Console versions of the GTA Trilogy are unaffected by the outage. In addition, previous installs that have been authenticated should still work for now. Regardless, this has upset a lot of fans who were already iffy on the Remastered Trilogy. Rockstar has truly squandered an opportunity here. Maybe we shouldn’t be surprised — there’s been a run of disastrous remastered releases lately. 

Now read:

November 12th 2021, 6:41 pm

This Startup Wants to Throw Satellites into Orbit With a Giant Centrifuge


Getting into space is hard, and one of the reasons is that it takes a lot of energy to break free of Earth’s gravity. So far, the only way we’ve found to do that reliably is with rockets, but a startup called SpinLaunch has something else in mind. Using a giant vacuum chamber and a rotating hypersonic tether, the firm hopes to essentially throw satellites into orbit. SpinLaunch has just completed its first kinetic test launch by heaving a vehicle high into the atmosphere. 

The key to SpinLaunch’s vision of the future is the Orbital Accelerator, a massive centrifuge that can accelerate a small vehicle to thousands of miles per hour. The company reaches these incredible speeds by running the centrifuge inside a vacuum chamber to reduce friction. When the hypersonic tether reaches the appropriate speed, it detaches from the vehicle in less than a millisecond. That sends the payload up through a chimney-like tube atop the accelerator. If all goes to plan, the system could send about 440 pounds (200 kilograms) into space, and almost all of the energy used is electric. 

The catch, however, is that the Orbital Accelerator doesn’t exist yet. SpinLaunch has started with a smaller version, which is still pretty massive. At 165 feet (about 50 meters), the Suborbital Accelerator is taller than the Statue of Liberty, and it works. SpinLaunch succeeded in flinging a test vehicle up and out of the vacuum chamber and tens of thousands of feet into the air. That’s a great start!

By starting small, SpinLaunch is able to validate its aerodynamic models and then scale up. The first kinetic launch only used about 20 percent of the system’s total power, too. Eventually, SpinLaunch plans to have a full ecosystem of space technology for the Orbital Accelerator. While the payloads won’t need rockets to leave the atmosphere, the vehicle will still have a small rocket motor inside the aeroshell to help position it in orbit. However, the fuel for that weighs very little. In a traditional rocket launch, most of the initial mass is fuel. The Orbital Accelerator has the potential to be a very inexpensive way to get small satellites into orbit—even cheaper than a reusable rocket like the SpaceX Falcon 9.

One potential issue is that to throw a few hundred kilograms into space, the hypersonic tether needs to move extremely fast. The payload will therefore be subjected to extreme centrifugal force. SpinLaunch says it will be “easy” to optimize payloads for kinetic launch. There’s not a lot of room for error, though. Even a small issue could tear the Orbital Accelerator apart if it occurs at launch speed. SpinLaunch isn’t the first company to explore this concept, but it is the first to actually build it. Hopefully it’s onward and upward from here on out.
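To put a number on that extreme force, the centripetal acceleration at the tether tip is v²/r. The figures below are illustrative assumptions rather than published specs: a tip speed of roughly 2,235 m/s (about 5,000 mph) and a tether radius of about 45 meters:

```python
# Centripetal acceleration at the tether tip: a = v^2 / r.
# Both inputs are illustrative assumptions, not SpinLaunch specs.
tip_speed_m_s = 2235.0   # ~5,000 mph
radius_m = 45.0          # rough radius for a Statue of Liberty-scale arm
accel_m_s2 = tip_speed_m_s ** 2 / radius_m
g_force = accel_m_s2 / 9.81
print(f"{g_force:,.0f} g")  # on the order of 10,000 g
```

That is why payloads must be specifically hardened for kinetic launch: ordinary satellite electronics are qualified for tens of g, not thousands.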

Now Read:

November 12th 2021, 4:23 pm

NASA’s Parker Solar Probe Is Being Pelted by Tiny Plasma Explosions


NASA’s Parker Solar Probe is built for extremes. It’s the fastest spacecraft the agency has ever built, and it orbits closer to the sun than any other artificial object. As it spirals inward for ever-closer views of our local star, it also encounters specks of space dust. Traveling at such incredible speeds (it reached 244,255 miles per hour on a recent pass) makes these impacts more powerful than the word “speck” would lead you to believe. In fact, NASA now says the hypervelocity dust impacts cause tiny plasma explosions that are detectable via the probe’s sensors. But don’t worry—it’s probably going to be just fine. 

Parker launched in 2018 fully prepared for harsh conditions. It sports a sun shield made of a carbon-fiber-reinforced carbon composite (sometimes called reinforced carbon-carbon). The probe orients itself for each pass of the sun so the shield can keep the instruments from being roasted. Parker already has the record for closest approach to the sun, but it’s going to continue breaking that record through 2024 when it will be just 4 million miles (6.2 million kilometers) from the surface. 

It’s currently rocketing around the sun at upward of 160 kilometers per second, and it’s only going to get faster. By the time it reaches that closest solar approach, Parker will be moving at nearly 200 kilometers per second—about 447,000 mph or 720,000 kph. Already, this is having consequences for the probe. While it does not have a dust detector, Parker has an electromagnetic field sensor known as FIELDS and an optical camera called WISPR. Both of these instruments have seen evidence of particles impacting at high speeds.

NASA reports that some of the hypervelocity dust impacts impart so much energy that they cause tiny plasma explosions on the surface of the probe. The dust might only be a few microns across, but the energies involved are huge at these speeds. NASA recently noted that the star tracking cameras were occasionally disrupted by these impacts. Parker needs those cameras to remain oriented correctly, so NASA started investigating. The team discovered the plasma clouds with the help of the FIELDS instrument. WISPR has also spotted some reflective debris from larger impacts on the body of the probe. While most impacts are too small to affect the probe, NASA did identify about 250 high-energy impacts. 
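A back-of-the-envelope calculation shows why a micron-scale grain can make plasma. Kinetic energy is ½mv², and the grain size, density, and impact speed below are illustrative assumptions rather than NASA figures:

```python
import math

# Kinetic energy of a hypervelocity dust grain: E = 1/2 * m * v^2.
# Assumed inputs: a 5-micron silicate sphere at ~160 km/s relative speed.
diameter_m = 5e-6
density_kg_m3 = 2500.0   # typical rocky dust
speed_m_s = 160_000.0    # roughly the probe's current orbital speed

volume_m3 = (4 / 3) * math.pi * (diameter_m / 2) ** 3
mass_kg = volume_m3 * density_kg_m3
energy_j = 0.5 * mass_kg * speed_m_s ** 2
print(f"{energy_j * 1000:.1f} mJ")  # a few millijoules
```

A few millijoules sounds tiny, but dumped into a spot a few microns wide in a fraction of a microsecond, it is more than enough to vaporize and ionize material at the impact site.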

Luckily, NASA built Parker to be robust. Hypervelocity dust isn’t enough to damage the hardware, and even slightly larger particles are unlikely to break anything upon impact. It can also maintain orientation if one of its data sources, like the star tracking cameras, temporarily drops out. Currently, NASA expects Parker to cope with the increased rate of plasma bursts as it gets closer to the sun in the next few years. Data gathered during this time could also help create better models of the environment around the sun, making future missions more predictable.

Now Read:

November 12th 2021, 12:39 pm

Windows Store Will (Finally) Let You Choose Where Games Are Installed


PC gamers using the Windows Store and Xbox Game Pass for PC, rejoice. Your long national nightmare is over. Microsoft has announced in a PC Community Update video that it’s bringing some huge and very welcome changes to gaming on the Windows platform. Chief among them is the ability to choose where games downloaded from Xbox Game Pass for PC and the Microsoft Store will be installed, as well as the ability to do whatever you want with the files. Previously Microsoft usually put games in a locked folder.

The update, which will be available exclusively to Xbox Insiders soon, is expected to roll out to the rest of the customer base in the near future. Not only will it allow you to change where games are installed, you can also install any game in any location, including on a drive other than C:\, which can be helpful if you’re the type of person who likes to keep important data off your Windows partition, since it sometimes needs to be nuked and reinstalled. In addition to custom install locations, you will be able to see which games allow mods and access the files themselves with no limitations, so you can swap save files with friends, edit config files, and do what users of other game-downloading services have been doing for quite some time now.

The update will also let users decide if they want Windows 11’s Auto-HDR feature to be enabled or not, as some users might prefer playing their older games in their original, non-HDR glory. Also, according to The Verge, which broke the story hours before Microsoft released its video, you will also be able to repair games, verify files, and make backups of your folders as well, which is another extremely welcome update.

On the surface this seems like one of those changes that should have been made ages ago, as it’s such a simple thing to do, and Microsoft’s competitors in this area have let you perform these rudimentary tasks since forever. We applaud Microsoft for listening to its customers and actually giving them exactly what they want. A revolutionary concept, we know.

As far as when it will be available for the masses, Microsoft has not confirmed those details as of this writing, but surely millions of gamers around the world are patiently waiting for it with bated breath.

Now Read:

November 12th 2021, 10:56 am

Vizio Makes 2x More Selling Ads and Data Than it Does on TVs


(Photo: Vizio)
Vizio may be known as a television company, but one look at its books will tell you it’s raking in cash from far more than flatscreens. In its latest earnings report, the newly public company revealed that its advertising and viewer data segment is making more than twice as much money as the segment that sells actual devices.

Since going public earlier this year, Vizio has operated its business in two distinct segments: Platform Plus and Devices. Platform Plus encompasses Vizio’s advertising and user data business, while Devices includes the sale of Vizio’s hardware. As it turns out, Platform Plus is responsible for more of Vizio’s gross profit than the Devices segment; Platform Plus pulled in $57.3 million over the last three months, while Devices made $25.6 million. 

This is because Vizio uses its TVs to gain access to its true moneymakers: advertising and viewer data. Vizio sells many of its TVs at cost (or close to it), which may at first sound like poor practice, but in reality it allows Vizio to begin tracking whatever the new owners of those TVs watch. Vizio can then sell that data, likely to whomever would be interested in which entertainment options garner the most viewership. It’s why smart TVs are somehow more affordable than “dumb” TVs—smart TVs may come loaded with far more capabilities, but they also provide manufacturers with avenues to track users’ viewing data, and sometimes their location as well. In a way, the end users of Vizio’s TVs are Vizio’s real product. 

Banner ads, like this one for The Bad Batch, are just one way Vizio makes money through advertising. (Photo: Vizio)

The southern California-based company also earns a significant portion of its keep by advertising shows, movies, and streaming services to users. If you own a smart TV, take a look at your remote: if it has a Netflix button, for example, that button is the product of negotiations between the TV manufacturer and Netflix to the tune of cold hard cash. Homescreen ads and banner ads (like the kind you see while browsing for something to watch) are also prime opportunities for Vizio to make money off of streaming services. 

Vizio isn’t the only hardware company that not-so-secretly generates a significant portion of its profits through viewer data. Roku also tracks viewer data in order to more effectively negotiate advertising deals that earn the hardware company a pretty penny. But while Vizio’s earnings report boasts that it’s now the #2 TV brand and #1 sound bar brand in the United States, don’t be fooled—the company might as well be considered an advertising firm. 

Now Read:

November 12th 2021, 10:56 am

Windows 11 SE Will Only Be Available on New PCs, Can’t Be Reinstalled if Removed


(Photo: Microsoft)
Now that Windows 11 SE has broken cover, questions abound for tinkerers who are wondering what the possibilities are for installing it on older laptops, upgrading it, etc. Microsoft has provided some answers in a lengthy Q&A PDF document, and the TLDR version is, “no, you can’t.”

According to Microsoft, you can only acquire Windows 11 SE by purchasing a new device, and that’s it. If you’ve got an older laptop that’s struggling under the burden of Windows 10, you are out of luck if you think the streamlined Windows 11 SE will be your savior, because it ain’t happening. The reasoning is actually quite simple, per the document, “Windows 11 SE is not intended for personal use. We expect that most customers would find Windows SE too restrictive on a personal device.”

On a similar note, if you were considering buying the $250 Surface Laptop SE and then installing Windows 11 on it (it comes with Windows 11 SE pre-installed), you can actually do that, or go with any licensed version of Windows, but you might never be able to reinstall 11 SE. Again, the document states, “…if you’d like to use a Windows 11 SE device for personal use, you may purchase a license for the version of Windows you’d like, completely erase all the data, files, settings, favorites, and the Windows 11 SE operating system on your device, and install your licensed version of Windows. After that, there would be no way to get back to Windows 11 SE.” This doesn’t necessarily rule out reinstalling 11 SE via recovery partition, however.

The Start menu in Windows 11 SE is quite streamlined (photo: Microsoft)

All that said, it would probably be wise to avoid installing Windows 11 SE on your personal device. It is an OS designed to be administered by an IT department, not a local user. That means you can’t install third-party native apps, as you can only run “web apps” in Edge or Chrome. To put a finer point on it, “App installation, advanced features, and setting changes are controlled by IT administrators rather than end users for this specialized version of Windows 11.”

None of this is really surprising. Most end users wouldn’t want a super stripped-down OS on their machine with a bunch of permission headaches included, and it’s not like you can just go buy a copy of Chrome OS for your device either, so Microsoft isn’t alone here. Not to mention the fact that you might not even be able to buy a device with Windows 11 SE installed, as it sounds like they’ll only be sold to schools and similar organizations. Microsoft notes in its Q&A that you “might” be able to buy one as an individual, which doesn’t sound too promising.

Now Read:

November 12th 2021, 9:40 am

Crypto Miners Driving High Demand for AMD CPUs with Big L3 Cache


Now that crypto miners and their scalping ilk have succeeded in taking all of our precious GPU stock, it appears they’re setting their sights on one more thing gamers cherish: the AMD CPU supply. According to a report in the UK’s Bitcoin Press, part of the reason it’s so hard to find a current-gen AMD CPU for sale anywhere is a cryptocurrency named Raptoreum, which mines on the CPU instead of an ASIC or a GPU. Apparently, its mining is sped up significantly by the large L3 cache embedded in CPUs such as AMD’s Ryzen, Epyc, and Threadripper lines.

Raptoreum was designed as an anti-ASIC currency, as its developers wanted to keep the more expensive hardware solutions off the blockchain, believing they lowered profits for everyone. To accomplish this they chose the GhostRider mining algorithm, a combination of the CryptoNight and x16r algorithms, and threw in some unique code to make it heavily randomized, hence its preference for L3 cache.

In case you weren’t aware, AMD’s high-end CPUs have more cache than their competitors from Intel, making them a hot item for miners of this specific currency. For example, a chip like the Threadripper 3990X has a chonky 256MB of L3 cache, but since that’s a $5,000 CPU, miners are settling for the still-beefy Ryzen chips. A CPU like the Ryzen 9 5900X has a generous 64MB of L3 cache, compared to just 30MB on Intel’s Alder Lake CPUs and 16MB on Intel’s 11th-gen chips. Several AMD models have this much cache too, not just the flagship silicon, including the previous-gen Ryzen 9 3900X. The more affordable models, such as the 5800X, have just 32MB of L3 cache, however.

According to the Raptoreum Mining Profitability Calculator, just one AMD 5950X can generate 181 Raptoreum a day, which at its current price is around $4. This means you could theoretically pay off the cost of the CPU in roughly six months, electricity costs aside. AMD CPUs are also favorable in this situation not just because of their L3 cache but also because of their excellent performance-per-watt, as they have been more efficient than Intel’s chips for some time now, though of course the company hopes to change that with Alder Lake. Still, Bitcoin Press notes, “Rough power consumption estimates for Ryzen 9 3900X system is rated at 190W with OC settings,” which is likely better than any CPU currently on the market.
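That payback estimate is easy to reproduce. In this sketch the $4/day revenue and ~190W draw come from the article, while the CPU street price and electricity rate are illustrative assumptions:

```python
# Rough Raptoreum mining payback for a Ryzen 9 5950X.
# Revenue (~181 RTM, about $4/day) and ~190W draw are from the article;
# the CPU price and electricity rate are illustrative assumptions.
daily_revenue_usd = 4.00
cpu_price_usd = 750.00            # assumed street price
power_kw = 0.190
electricity_usd_per_kwh = 0.12    # assumed rate
daily_power_cost = power_kw * 24 * electricity_usd_per_kwh
payback_days = cpu_price_usd / (daily_revenue_usd - daily_power_cost)
print(f"{payback_days / 30:.1f} months")
```

Ignoring electricity, $750 / $4 per day works out to roughly six months, matching the article; factoring in power at these assumed rates stretches it past seven.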

Making matters potentially even worse, AMD’s next update for its Ryzen CPUs will add V-Cache, which is vertically stacked L3 cache on-die, with 64MB per chiplet. These L3 upgrades will certainly make its chips even more desirable for Raptoreum miners, assuming the currency still exists when the updates arrive in early 2022.

Now Read:

November 12th 2021, 9:40 am

Mission Update After Ingenuity’s Flight 15: Craters, Swashes, and Servo Wiggle


Ingenuity’s mission on Mars has been a resounding success, but it is not without its emergent challenges; reports have returned from the space copter’s latest exploits, and things are getting tough out there. This fall, we experienced solar conjunction with Mars, where the two planets line up on exactly opposite sides of the sun. It’s a cool astronomical event, but it also forces a communications blackout that takes weeks to clear up, because the Sun’s radio emissions can corrupt signals we try to send between here and there. Ingenuity has also been dealing with declining atmospheric pressure for some weeks, as the Martian seasons roll ahead. And now, as if it wasn’t tough enough piloting a space helicopter on Mars, a mystery wobble is forcing NASA to decide whether to patch Ingenuity’s flight software.

Just after flight 13, telemetry reported a mechanical wobble in the swash plate atop Ingenuity’s rotor shaft. Eventually, it was traced to a minute oscillation that appeared to come from two of the six flight control servos. Thanks to the infuriating nature of intermittent problems, it showed up once and then vanished, and so far the problem isn’t easily repeatable. Mission scientists have tested the helicopter with a series of “servo wiggles,” but have been unable to reproduce the same deviation.

The upper swashplate of NASA’s Ingenuity Mars Helicopter controls the pitch of the upper rotor blades as they rotate, and is critical to stable, controlled flight. (Image: NASA/JPL-Caltech)

After communication with Ingenuity was restored, the scientists fired up the space copter for flight 14, a 23-second test hop at 2,700 RPM. Again, no wobble. Never one to pass up a chance to do a little science, Ingenuity “opportunistically” snapped a bunch of images of scientific interest during that hop, including a burst taken at seven frames per second. The raw images are still being processed, but like the rest of the frames captured by Ingenuity on its flights, they are available to the public via Ingenuity’s mission portal as soon as they’re posted.

Then, the rotorcraft started the home stretch of its months-long excursion. After more than a dozen successful outbound flights, the helicopter turned and began to make its way toward home base with a two-minute flight, the first of perhaps half a dozen that Ingenuity will make before it reaches Perseverance. This most recent flight is Ingenuity’s second after the solar conjunction, and its fifteenth overall. NASA announced its success via Twitter, and while they haven’t put out official stats from the flight as of this writing, here are Ingenuity’s most current performance numbers:

Source: NASA/JPL

These high-RPM test flights are critical because they allow mission control to test its models for flight in an extremely low-pressure environment. Mars’ northern hemisphere is in the middle of summer, which means that its already delicate surface pressure drops by a third, settling at a wispy one percent of the sea-level pressure here on Earth. With fewer air molecules to push around, Ingenuity has to spin its rotor up to higher and higher speeds in order to stay aloft. But these more demanding flights also generate data on “critical high-RPM motor performance, which the team will use to design and tailor upcoming low-density flights in the months ahead,” said Teddy Tzanetos, Ingenuity Team Lead at NASA’s JPL.
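The relationship Tzanetos describes can be approximated with a toy model: rotor lift scales roughly with air density times the square of rotor speed, so holding lift constant as density falls means spinning faster by the square root of the density ratio. A minimal sketch, assuming Ingenuity's nominal rotor speed of roughly 2,537 RPM (the one-third density drop comes from the article; the lift model is a simplification that ignores Reynolds-number and blade-stall effects):

```python
import math

# Simplified model: lift ∝ air_density * rpm^2, so holding lift constant
# requires rpm_new = rpm_old * sqrt(density_old / density_new).

def rpm_for_constant_lift(rpm_old: float, density_ratio: float) -> float:
    """density_ratio = new density / old density (e.g. 2/3 for a one-third drop)."""
    return rpm_old * math.sqrt(1.0 / density_ratio)

# If seasonal density falls by a third, as described above:
print(rpm_for_constant_lift(2537, 2 / 3))  # ≈ 3107 RPM
```

The real flight team's models are far more detailed, but the square-root scaling explains why even a modest seasonal pressure drop pushes the rotor toward these much higher test speeds.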

The space copter’s current mission is to rendezvous with Perseverance, scouting its path for a northward sortie along the eastern edge of the Seitah region. After rounding the curve of the Seitah depression, the pair will tack northeast past the Octavia E. Butler landing site. From there, they will skirt around the rim of another crater, before turning west again to head for Three Forks, an ancient river delta that opens up at the northwestern edge of Seitah.

This annotated image of Mars’ Jezero Crater depicts the route Perseverance will take on its way to Three Forks. Image provided by the HiRISE instrument aboard NASA’s Mars Reconnaissance orbiter. (Image: NASA/JPL/Caltech/University of Arizona.)

The delta is thought to be full of sediment and rocks, laid down in primordial floods that carved out a riverbed, filled Jezero with water, and turned it into a crater lake with a flat bottom covered in ripples of silt and sand. Perseverance has taken spectroscopic readings that back up the hypothesis; like craters in Arabia Terra just to the east, Jezero’s sediment is loaded with clay, which forms in the presence of water. That confirms Jezero held a lake. (Jezero actually means “lake” in many Slavic languages.) But our orbital imaging hasn’t been sharp enough to test these finer-grained assumptions, so the only way to know for sure was to go there and check. “Confirming that there was a lake in Jezero Crater is the first major science result of the mission,” explained two Perseverance team members, Melissa Rice and Briony Horgan, in an essay. “In the coming year, Perseverance will drive up to the top of the delta, studying the rock layers in microscopic detail along the way and collecting many samples. When those samples eventually make their way to Earth, we will learn if they contain signs of microbial life that may once have thrived in this ancient lake on Mars.”

Now Read:

November 11th 2021, 7:29 pm

Valve Says Steam Deck Will No Longer Ship in Time for the Holidays


It’s a sad day for anyone who was planning to while away the holidays with a shiny new Steam Deck. Valve has announced that its upcoming handheld gaming machine won’t launch on time. Instead of reaching gamers in time for the holidays, the launch has been pushed back to February 2022. You can thank the messy global supply chain for the delay. 

The Steam Deck looks a bit like a Nintendo Switch, but it’s a beefier piece of hardware that can play PC games that run on the Linux-based Steam OS. Valve opened pre-orders a few months back, saying that the first wave of orders would ship in December. However, the projected shipping date began slipping for those who didn’t get in the queue at the earliest possible moment. That should have been a warning sign that the timeline was iffy, to say the least. 

A device like the Steam Deck requires numerous components from different suppliers, and many of them, like chips, are severely constrained. Valve has apologized for the delay, saying, “We’re sorry about this—we did our best to work around the global supply chain issues, but due to material shortages, components aren’t reaching our manufacturing facilities in time for us to meet our initial launch dates.” 

As of now, the projected shipping date for the Steam Deck is February 2022. That’s just for the earliest pre-orders, though. Everyone keeps their place in the queue, so it might be a few more months before the orders that came in later are ready. If you didn’t pre-order one but still want a Steam Deck, get ready for some waiting. Valve doesn’t expect to have any additional units to sell until later in 2022. Previously, it was saying Q2, but that timeline may have been affected by the latest delays. 

This is not the first delay for the Steam Deck. Over the summer, Valve had to adjust the timeline for the base model console, which sports just 64GB of eMMC storage. The more expensive NVMe models with 256 and 512GB of storage were not affected at the time. It’s unclear if the different models will have different fulfillment schedules now that shipping won’t even start until February. On the plus side, this could give developers more time to prepare for the Steam Deck. Valve will have a verification program that will tell you which games work best on the handheld. Valve says most games it has tested have no trouble maintaining 30FPS on the Steam Deck.

Now Read:

November 11th 2021, 7:29 pm

Apple Backs Off Punishing Those Who Repair Their Own iPhone Screens


(Photo: Kilian Seiler/Unsplash)
In vague terms, Apple has stated it plans to finally allow customers to repair their iPhone displays through third parties—without losing the ability to use FaceID.

As originally discovered by iFixIt, new iPhone displays contain a tiny controller chip, an integrated circuit responsible for translating screen taps into signals the phone can understand. This is a change: prior to the iPhone X, this chip lived on the logic board within the body of the device. As it stands today, a serial lock requires that the display chip remain paired to the rest of the phone in order to retain use of FaceID, Apple’s facial recognition feature. 

The display chip’s placement has historically thrown a wrench into any attempt to replace iPhone screens through third parties. The chip has to be transferred to the new display, which heavily complicates repairs. While Apple and its authorized repair partners possess a special software tool that maintains pairing between the device and its display during and after repairs, third-party shops are left in the dust. iPhone users are therefore forced to choose between paying the steeper repair costs at Apple or losing FaceID, Apple’s only non-passcode unlock method since it got rid of TouchID with the iPhone X redesign. 

Apple got rid of TouchID with the iPhone X, leaving users with passcode entry and FaceID. (Photo: Miguel Tomas/Unsplash)

Apple now says it will eliminate this issue in a future iOS update, giving iPhone users more choices when they need to get their display repaired. While Apple hasn’t yet stated which update will deliver the fix, there will eventually come a point when iPhone users can stress a little less over breaking their screens. “A solution will be available in an upcoming software update,” said an Apple spokesperson to The Register. How quickly this occurs depends on how soon Apple is willing to give up the revenue it makes from screen replacements (which, by the way, are the most common type of iPhone repair).

In the meantime, iFixIt is celebrating the victory—with a healthy dose of skepticism. “Apple — and the many companies it inspires — will advance again with more part lock-downs, more feature reductions, more reasons why only their profitable repair centers can do this work,” says iFixIt writer Kevin Purdy. “This is the kind of thing that keeps us fighting for the right to repair, in the press and everywhere else we can.”

Now Read:

November 11th 2021, 10:25 am

A USB-C Hub with a Display? Why it Might Be the Perfect Thing


Until recently, I’ve thought of a USB hub as a pretty simple piece of gear. Sure, some are powered and some aren’t, and of course there are USB 3.0 versions, with some USB 2.0 versions still floating around, but for the most part they’ve traditionally been pretty boring. That has all changed now with the introduction of USB-C hubs that can handle high-resolution monitors along with high-speed peripherals and even power distribution to your gadgets.

Introducing the DockCase USB-C Smart Hub

Until recently, finding out whether a hub was compatible with your gear has been a simple but not very scientific process: plug it in and see what happens. DockCase has started to change all that with a programmable “smart” hub that provides information about its status on an LCD and supports multiple configuration options. Okay, when I first heard about it I thought it was a bit over the top, but that was before I went through two USB-C hubs that I purchased without being able to get them to work reliably with my Dell laptop and the devices I connect every day when I’m teaching: Ethernet, a 4K Dell display, a mouse and keyboard dongle, a USB headset, and sometimes an external webcam.

None of these devices are exotic or unusual, and all work perfectly when connected to my Precision 5540 laptop directly. But unfortunately, the 5540 only has one USB-C port, two USB-A ports, and no Ethernet jack, so I really need a dock. Plus, unplugging all those cords multiple times a day is definitely a hassle. With the DockCase Smart Hub, expected to retail for $150, you’re paying for all this functionality, but if you rely on a laptop and dock regularly, that’s a small price to pay for consistent performance.

The DockCase Smart Hub handled all the various dongles I tested it with perfectly, which was definitely not true of some of the other hubs I’ve used.

The LCD is Surprisingly Useful

One issue with most docks is that you can’t really tell what they are doing, and why they might be unable to work with a certain piece of gear. It could be a bandwidth issue in the case of a monitor, or a power limitation, or some other incompatibility. The Smart Hub takes the mystery out of that by providing a lot of useful status information on its LCD. You can see how much power it’s drawing, along with where the power is going. You can also see the bandwidth it’s able to achieve on each of its ports, and often the type of device connected. For example, I had a hard time getting my Dell 4K monitor to work with two other hubs. It wasn’t instant with the Smart Hub either, but the LCD helpfully guided me through some options, and I figured out that I could keep it stable easily at 4K/30fps by tweaking a couple of settings. 4K/60fps was a little more challenging, but ultimately also worked after I changed the HDMI setting. Also, keep in mind that the dock is running beta firmware, and doesn’t launch until January 2022, so I assume there will continue to be improvements.

With my previous docks, when an adapter didn’t work, I was pretty much out of luck. Aside from trawling the web, it was basically a matter of trying a different one. In the case of debugging my 4K monitor, it showed exactly what was happening on my HDMI 2.0 cable. So it let me know I was having stability issues at 4K/60fps, and suggested that I lower the frame rate by changing the dock’s HDMI setting to “Balanced” instead of “Extreme”. The only thing that didn’t work is that I couldn’t use my 4K Dell monitor when it was plugged directly into the dock’s USB-C port. But it was easy to use an HDMI cable instead.
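Back-of-the-envelope numbers help explain why 4K/60 was the sticking point while 4K/30 was easy: the raw pixel payload of 4K/60 alone approaches HDMI 2.0's 18Gbps ceiling once blanking intervals and encoding overhead are added on top. A rough sketch (payload only, ignoring that overhead):

```python
def raw_payload_gbps(width: int, height: int, fps: int,
                     bits_per_pixel: int = 24) -> float:
    """Uncompressed pixel data rate in Gbps, ignoring blanking and line coding."""
    return width * height * fps * bits_per_pixel / 1e9

print(raw_payload_gbps(3840, 2160, 60))  # ≈ 11.9 Gbps payload
print(raw_payload_gbps(3840, 2160, 30))  # ≈ 6.0 Gbps payload
```

With blanking and 8b/10b TMDS encoding, the 4K/60 signal runs close to the full 18Gbps link, which is why marginal cables or hubs tend to fail there first, while 4K/30 leaves plenty of headroom.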

Configuring a DockCase Smart Hub

The hub’s parameters are controlled using a single button. Short presses advance through options like a Tab key, while long presses choose an option. As you can imagine, this isn’t exactly a UI you’d want to use all the time, but once you get the hang of it, it’s simple enough for periodic configuration changes. You can set the amount of power the hub will consume, parameters for the embedded fan, and HDMI and USB settings. There are preset combinations for prioritizing data, video, or charging, but you can also customize your own. Some setting changes only take effect after you unplug and re-plug the hub from its power source, but that’s about the extent of what you need to know.

Another interesting feature of the dock is power management. It accepts up to 100 watts over one of its USB-C ports, and can be set to either funnel a measly five watts to connected peripherals, leaving 95 watts for charging the attached computer, or it can steer up to 25 watts for peripherals like SSDs that might require more power. Alternatively, if your computer is plugged in and can provide power over USB-C, the dock can use that instead.
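The power split described above is simple budgeting; here is a minimal sketch of the two modes (the function name and mode labels are mine, not DockCase's):

```python
def laptop_charge_watts(input_watts: float, peripheral_watts: float) -> float:
    """Watts left for charging the host laptop after peripherals take their share."""
    return max(input_watts - peripheral_watts, 0.0)

# With a 100W USB-C input, per the dock's stated modes:
print(laptop_charge_watts(100, 5))   # charging-priority mode: 95 W to the laptop
print(laptop_charge_watts(100, 25))  # peripheral-priority mode: 75 W to the laptop
```

The trade-off is straightforward: every watt steered toward bus-powered SSDs or other peripherals is a watt no longer available to charge the attached computer.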

One change from previous models that not everyone will like is that the card reader has been replaced with an additional USB-C port. Obviously that provides more flexibility, since USB-C can connect to a variety of devices, but for those missing a card reader on their laptop, it adds one more dongle to the list. On the upside, the company has added an essentially silent small fan to address some heat issues with the previous model.

Price and Availability

DockCase expects the product to be priced at $150 when it ships in January. In the meantime you can learn more, or buy-in to one of their offers for backers on the project’s Kickstarter site.

Now Read:

November 11th 2021, 10:25 am

The Success of the Xbox Series X and PlayStation 5 Makes Them More PC-Like Than Ever


The Xbox Series X and PlayStation 5 both launched roughly a year ago. Both have enjoyed success; reports from late October, if accurate, claimed eight million sales for Microsoft’s Series S|X family and 13.4 million for both flavors of the Sony PlayStation 5.

The fact that the newest consoles from Microsoft and Sony have sold well is not unusual in and of itself. The fact that they have sold as well as they have despite a near-complete lack of what we would nominally call launch titles, however, is a little more interesting. There are no truly “next-generation” exclusives (as the term is typically defined) available for the Xbox Series S|X yet, and only two for the PlayStation 5: Ratchet & Clank: Rift Apart and Returnal. This implies that one of the handful of remaining distinctions between PCs and consoles is becoming obsolete.

PCs and consoles have been growing more alike for at least the past 25 years, but one of the few remaining technical differences between the two platforms has been the concept of a console generation. It’s not accurate to say the PC gaming industry never suffered from an analogous problem — gamers who lived through the plethora of PC hardware standards available in the 1980s and early 1990s, or who remember the floppy-to-CD transition, will know what I mean — but the advent of Wintel and standardized 3D graphics APIs largely solved it. PC gaming performance has moved smoothly upwards over time as new hardware launches, rather than jumping every 4-7 years with the arrival of a new hardware configuration.

The PC industry emphasized backwards compatibility because business customers and consumers alike demanded it. DirectX compatibility levels and support for specific versions of Windows have served as weak gatekeepers, but not in the same way as console platform compatibility.

Up until 2020, each console generation has debuted with a suite of next-generation launch titles ostensibly tuned to show off the features and capabilities of the new hardware. Some platforms have launched with stronger lineups than others, but even consoles that highlighted their backward-compatible capabilities like the PlayStation 3 and Wii U emphasized new gaming experiences first and foremost. The PlayStation 5 and Xbox Series X were initially marketed on their ray tracing capabilities and next-generation storage performance, but the COVID-19 pandemic has delayed software developers around the world, leaving both platforms almost entirely reliant on the experience they offer in what would normally be called “last-generation” games as opposed to the cutting edge visuals and frame rates available in new titles.

Surprisingly, this has not dented sales for either platform. By all accounts, Sony and Microsoft are selling every console they can build. This suggests that gamers no longer necessarily expect hardware and software refresh cycles to move in sync with each other. By decoupling them, the console market has become much more like the PC space it is typically contrasted with.

The PC Model Wins Again

The console industry took several significant steps towards what I’m loosely calling the “PC model” during the Xbox One / PlayStation 4 cycle. First and most obviously, Sony and Microsoft adopted the x86 CPU architecture. Second, both companies launched mid-generation hardware updates with upgraded CPU and GPU capabilities that retained backwards compatibility in a more PC-like fashion. Third, digital distribution became more popular.

The pandemic makes this a special case, to some extent, but I don’t think it completely explains the phenomenon. PC gamers don’t just upgrade for better performance in cutting-edge titles they intend to play in the future. Performance in old favorites matters, too. One of the best parts of a GPU swap is testing performance in titles and settings that used to bring your system to its knees. Both Sony and Microsoft gamers are taking advantage of a feature the PC world has long enjoyed — and, in the process, proving that consoles don’t necessarily need an enormous slate of launch titles to be successful.

I don’t think console makers are going to abandon the concept of launch titles, if only because a launch event gives a console manufacturer an unparalleled opportunity to show off what a new platform is capable of at a time when player interest is at its highest. But it’s another example of how the PC and console ecosystems have become more similar over time.

There will always be some degree of difference between PCs and consoles in terms of preferred input methods and the particular features of their various ecosystems. PCs will always be useful for a wider range of activities than consoles are, and consoles are more likely to serve as living room entertainment systems. But the differences between the two respective gaming markets continue to narrow with every passing generation.

Consoles have influenced PC gaming, too — low-level APIs like DirectX 12 were a console feature first — but on the whole, I’d argue that consoles have become more like PCs than PCs have become like consoles. It’s no longer accurate to say that consoles are just for playing games. While this remains their primary function, and PCs remain far more flexible overall, modern consoles are capable of surfing the internet and streaming content from many services too.

Backwards Compatibility is Here to Stay

Backwards compatibility has never been as robust on consoles as it has on PCs, but it’s impossible to imagine the capability going away after the success of the Xbox Series X and PlayStation 5. Backwards compatibility is what allowed both Sony and Microsoft to launch their hardware in 2020 in the first place.

Console buyers are now accustomed to mid-generation upgrade cycles and to the idea that hardware and software launches aren’t necessarily connected to each other. The concept of a “console generation” is now fuzzier than ever, even before we talk about projects like Xbox Cloud Gaming.

Consoles are never going to be subsumed into the PC gaming sphere. Nintendo, Sony, and Microsoft have built enormous businesses and brands around their respective hardware and the experience they sell is distinct from that of gaming on a PC. But while Microsoft and Sony may call what they sell a “console,” that’s a marketing term, not a technical one.

Architecturally, what both companies sell is a PC running custom software on a customized SoC. The success of the Xbox Series S|X and the PlayStation 5, absent much in the way of launch titles, is proof that a PC-like software / hardware launch cycle can work on the console side of the business as well. That’s not enough reason for Sony or Microsoft to kill the idea of launch titles outright, but it shows how the console business continues to evolve to resemble its PC counterpart.

Now Read:

November 11th 2021, 10:25 am

Metallurgist Who Tested Steel for US Navy Subs Admits Faking Results


(Photo: Darren Halstead/Unsplash)
A metallurgist once responsible for testing the steel used on US Navy submarines has pled guilty to falsely representing the steel’s integrity for more than three decades. 

Elaine Marie Thomas, a 67-year-old former director of metallurgy, worked from 1985 to 2017 at Bradken, a Washington foundry that supplied steel submarine hull castings to the US Navy. During that time, Thomas was responsible for testing steel castings to verify their strength and toughness. Testing was conducted at extremely cold temperatures (-100°F, roughly -73°C), which helped ensure that the material wouldn’t fail in the event of a collision or other dangerous circumstances. Thomas found it “stupid” that the tests needed to be conducted under such extreme conditions and falsely recorded positive test results to make the castings appear more durable than they might have been. She faked steel test results in at least 240 cases—a “substantial percentage of the castings Bradken produced for the Navy,” according to prosecutors. 

Despite the foundry changing ownership in 2008, evidence of Thomas’s fraudulent habits didn’t emerge until 2017, when a lab employee found that Thomas had altered steel test cards. The United States Attorney’s Office filed a criminal complaint charging Thomas with Major Fraud Against the US. During the Navy’s initial investigation of Thomas’s wrongdoings, it also found that Bradken had invoiced the Navy as if the castings sold had met the rigorous standards they never actually met. Bradken has since been required to enter into a compliance agreement with the Navy, in which Bradken has been ordered to create a compliance and risk committee as well as hire people who specifically oversee lab testing and tracking. 

(Photo: Bradken)

In 2020, Bradken entered into a deferred prosecution agreement that required the company to accept responsibility for the faked test results and comply with certain remedial measures, including the payment of $10.8 million in fines. The agreement, should Bradken fulfill it completely, will allow the government to dismiss the criminal charges against the company after three years. 

Fortunately, no submarine hulls under Thomas’s watch have failed. The Navy, to great financial cost, has taken steps to ensure that affected submarines are operated safely since the discovery of the fraudulent test results.

Thomas is scheduled to be sentenced in February 2022. She faces up to 10 years in prison, on top of a maximum $1 million fine. 

Now Read:

November 10th 2021, 2:09 pm

Facebook Shuttered Team Researching Social Media Addiction


(Photo: Joshua Hoehne/Unsplash)
Another week, another fresh embarrassment for Facebook. In what appears at first to be good news, new whistleblower documents reveal that Facebook had a team responsible for reporting on and solving unhealthy social media use. The team, which was focused solely on user well-being, investigated the ways in which people used Facebook, and was tasked with finding ways to improve upon the platform.

But “had” is the keyword, here. According to internal documents inspected by the Wall Street Journal, Facebook shut down the team in 2019.

Worse, the team appears to have been on the cusp of making major suggestions that could have benefited users—at the cost of time spent on the platform, which is crucial to Facebook’s ability to earn revenue. Before they were disbanded, the researchers on the team conducted a survey of 20,000 users and found that one in eight engaged in “problematic use” of Facebook. Problematic use, they said, produced a variety of negative effects on key aspects of users’ lives. Some users reported loss of productivity, while others said their sleep was impacted by late-night scrolling and viewing disturbing content. Users also reported deterioration within their interpersonal relationships; parents even avoided their children in favor of spending more time online. One user missed a family member’s wedding because they were watching a video on Facebook. Another said it was common for them to browse the app until 2 AM, making it difficult to wake up feeling rested the next morning.

(Photo: Brett Jordan/Unsplash)

Anyone familiar with the scientific method will tell you correlation isn’t causation—as will Facebook itself, whose parent company, Meta, denies WSJ‘s interpretation of its research. But the correlation between Facebook and unhealthy social media use isn’t exactly encouraging. It also isn’t comforting in the face of Facebook’s recent rebrand, during which Zuckerberg has publicly aimed at blurring the lines between the “real” world and the virtual world by building out a metaverse. 

Facebook’s internal documents reveal that the company knew its platform was more frequently associated with addictive use than other virtual experiences, including Reddit, YouTube, and World of Warcraft. Whistleblower Frances Haugen spoke just last month about how Facebook is designed to reward controversial (and sometimes, downright hateful) content due to the way its algorithms favor engagement above all else. One of Facebook’s subsidiaries, Instagram, was also found this year to have a uniquely poor impact on its users thanks to its algorithms and user interface. 

Despite this, the company has only made half-hearted attempts at improving its platform to address these issues. It added a time-management tool to its mobile app in 2018, as well as a “quiet mode” that muted push notifications in 2020. But the latter feature was hidden among the app’s settings, and Facebook’s algorithms still push unsavory content to the top of users’ news feeds. Facebook recently squashed an outside attempt at helping people curb overuse of the app, so it’s unlikely that we’ll see any real strides toward user well-being in the near future. 

Now Read:

November 10th 2021, 2:09 pm

OWC Announces PCIe SSD Capable of 26GB/s Data Transfer and 64TB Capacity


If your rinky-dink NVMe M.2 PCIe SSD isn’t cutting the mustard with its laughable 7GB/s transfer rates and middling capacity, OWC has a solution that is sure to clear up any storage bottlenecks you might be dealing with. Its new ACCELSIOR 8M2 PCIe SSD “solution” holds up to eight M.2 SSDs, and when connected to a next-gen platform with PCIe Gen 4 support, it can rip through any workload with a staggering 26GB/s of throughput. If you’re still on a platform with PCIe Gen 3, it’s capable of a still-decent 12GB/s, faster than any single SSD you can buy today.

Of course, the ACCELSIOR isn’t exactly a single SSD itself, but rather an add-in card that slots into a PCIe x16 slot and holds up to eight individual drives, which are then joined together using software RAID. Dubbed SoftRAID XT, the software allows you to create multiple types of RAID arrays, including 0/1/4/5/1+0 (10) volumes, with no drivers needed, according to OWC. Only M.2 SSDs with PCIe Gen 3 or Gen 4 specs are compatible, so for maximum speed you would also need an AMD Zen 2 or Zen 3 platform, or Intel’s brand-new Alder Lake.
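SoftRAID XT itself is proprietary, but usable capacity for the RAID levels it offers follows the standard formulas; here is a quick sketch, assuming eight equal 2TB drives as in OWC's test configuration:

```python
def usable_tb(n_drives: int, drive_tb: float, level: str) -> float:
    """Usable capacity for common RAID levels with n equal-sized drives."""
    if level == "0":          # striping: all capacity, no redundancy
        return n_drives * drive_tb
    if level == "1":          # mirroring: half the capacity
        return n_drives * drive_tb / 2
    if level in ("4", "5"):   # one drive's worth of parity
        return (n_drives - 1) * drive_tb
    if level == "10":         # striped mirrors: half the capacity
        return n_drives * drive_tb / 2
    raise ValueError(f"unsupported RAID level: {level}")

for lvl in ("0", "1", "4", "5", "10"):
    print(f"RAID {lvl}: {usable_tb(8, 2.0, lvl)} TB usable")
```

For reference, a PCIe 4.0 x16 slot tops out at roughly 31.5GB/s after encoding overhead, which leaves headroom for the claimed 26GB/s in a redundancy-free RAID 0 configuration; parity or mirrored levels trade some of that speed and capacity for fault tolerance.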

The Accelsior 8M2 is so hardcore it has a 6-pin PCIe power connector. (photo: OWC)

As far as the benefits of this type of storage array go, OWC claims it allows the following:

We’re not exactly content creation professionals, but that sounds like a lot of data pumping through the system, which makes us a tiny bit giddy. There is one big caveat though, as OWC notes in the fine print:  “Up to 26,926MB/s sequential read/write (max) performance based on testing a 16TB (8 x 2.0TB) OWC Aura Pro series SSD equipped Accelsior 8M2 installed in a Windows 10 PC equipped with a Gigabyte Technology x570 motherboard with an AMD Ryzen 9 3900 3.8GHz processor and 16GB RAM, running Crystal Disk Mark 7.0.0 (sequential 1Mbyte block size, 16 queues, 6 threads). Performance will vary depending on SSDs used, CPU speed, RAID setting, and PCIe version.” In other words, your mileage may vary.

However, you can just buy the exact configuration OWC used to arrive at this crazy level of performance for just $4,300. If you recently came into an inheritance or invested in Tesla stock last year, the 64TB version is a measly $13k. It makes us wish we could stripe two of them together for a total of 16 SSDs, with something like an SLI adapter between them. Maybe OWC is saving that for a future iteration.

Now Read:


November 10th 2021, 10:11 am

Microsoft Takes on Chromebooks With New $250 Surface Laptop SE


(Photo: Microsoft)
The long-rumored Chromebook competitor from Microsoft has finally arrived, and it’s called the Surface Laptop SE, running on a stripped-down version of Windows 11, also dubbed SE. The $250 laptop is aggressively priced and aimed directly at students, as well as at schools using Google’s popular Chromebooks.

The laptop has rather modest specs, which is not a surprise given its price tag. At the heart of the all-plastic beast lies a Celeron processor, in either dual-core or quad-core trim, along with 4 or 8GB of DDR4 memory. Storage is handled by embedded eMMC, available in either 64GB or 128GB capacities. It offers an 11.6” TFT display with 1366 x 768 resolution, 802.11ac Wi-Fi, Bluetooth 5.0, a 720p webcam, and lone USB-A and USB-C ports for connectivity. Microsoft claims the 2.45lb. laptop offers up to 16 hours of battery life.

Microsoft’s Surface Laptop SE is all-plastic and weighs a very totable 2.45lbs. (photo: Microsoft)

Though the hardware is interesting, Windows 11 SE shares top billing, as it’s a new version of Windows designed explicitly to be managed by IT administrators in a classroom setting. This is not to be confused with Windows 11 S Mode, which is also a streamlined version of Windows 11. Ok, it is confusing, we admit it. That said, this new SE version comes pre-configured for student privacy and remote management, according to Microsoft, and is also a “cloud-first” OS, so it’s safe to assume many of the device’s apps will run from the cloud rather than being installed locally. Naturally it runs Microsoft’s Edge browser, but it can use Chrome extensions, Google Classroom web apps, and Office 365 web apps as well. This should make it easier for schools to ditch Google’s hardware for Microsoft’s, as the transition should be relatively trouble-free as far as compatibility goes.

Another interesting feature is repairability: Microsoft told Ars Technica that, “…vital components like the display, battery, keyboard—even the motherboard—can be easily repaired onsite, saving time and money for IT admins and schools.”

This is certainly a major play in a market dominated by Google and Apple, and as we reported earlier, Chromebook sales have skyrocketed as schools transitioned to hybrid learning during the pandemic. We noted at the time that the rise of Chromebooks is a big threat to Microsoft due to the age at which eager students become engrossed in the Google ecosystem. With the Surface Laptop SE, Microsoft is obviously hoping to turn the tide.


November 10th 2021, 10:11 am

Original Apple-1 PC Expected to Fetch a Huge Price at Auction


(Photo: John Moran Auctioneers)
If you thought Apple’s new M1 Pro and Max MacBook Pro laptops were expensive, they can’t hold a candle to the latest Apple computer up for sale. John Moran Auctioneers in Southern California is taking bids on a functioning Apple-1 computer, designed by Steve Wozniak and built by Steve Jobs himself. The ultra-rare PC is expected to sell for between $400k and $600k, if not more, as pre-bidding has already reached $225k. The auction begins in earnest today, November 9th.

According to the auction listing, included in the package is an Apple-1 “NTI” motherboard with original blue Sprague 39D capacitors, original power regulators, rare original “Circle D” ceramic .01 capacitors, and an Apple Cassette Adapter (ACI) in an original ByteShop Apple-1 koa wood case with Datanetics Keyboard Rev D. The auction house states that there are only six known koa wood cases in existence, so everything about this PC is as rare as you can get.

More from the listing: additional amenities include an Apple-1 connecting cable and power supply, partnered with a 1986 Panasonic video monitor, accompanied by a period Xerox copy of the Apple-1 Basic Manual, the Apple-1 Operations Guide, an original MOS 6502 programming manual, and two Apple-1 software cassette tapes with a period hand-written index card listing memory locations for the Apple-1 loading software; further accompanied by three original video, power, and cassette interface cables.

The Chaffey College innards (photo: John Moran Auctioneers)

The PC itself is known as the Chaffey College model, as Jobs originally sold 50 units to a computer store named ByteShop in Mountain View. The shop owner who bought the units was upset to find they weren’t an all-in-one PC with an attached monitor, keyboard, and I/O ports, but instead all individual components that required assembly. Jobs changed the shop owner’s mind by informing him it was an easy upsell to just add a monitor, power supply, and keyboard to each purchase.

The shop ended up selling one of them to a professor at Chaffey College, who then sold it to a student in 1977, who has held onto it for 40 years. The “student” is the one putting it up for auction, though he or she is remaining anonymous.

Only 200 Apple-1 PCs were made, hand-built by Jobs himself in his garage along with his sister Patty and Daniel Kottke, under the guidance of Wozniak. Only 175 were ever sold, at a unique price of $666.66 due to Wozniak’s adoration for repeating numbers, according to the listing. The auction reports that the Chaffey College unit “has recently undergone an extensive authentication, restoration, and evaluation process by one of the foremost experts in the field.”  There is no indication the expert attempted any overclocking on the Apple-1 model, unfortunately.


November 9th 2021, 5:24 pm

AMD’s Zen 4 Roadmap Revealed: 96 Cores in 2022, 128-Core CPUs Arrive in 2023


AMD may have taken a breather on core counts in the past few years, but the company is poised to ramp things skyward again. AMD CEO Lisa Su shared the company’s updated server roadmap at its Accelerated Data Center event. In addition to new Milan CPUs with 768MB of L3 cache courtesy of AMD’s upcoming V-Cache, 2022 and 2023 will see new increases in core counts.

First up, Genoa, which arrives on 5nm with up to 96 cores, PCIe 5.0, and DDR5 support. While Genoa will use the standard Zen 4 core, a follow-up chip that scales up to 128 cores, Bergamo, will use a tweaked Zen 4C design and debut in 2023. Like Genoa, Bergamo will be built on the 5nm process node.

AMD is playing coy with the difference between Genoa and Bergamo beyond the latter’s increased core count. “AMD optimized the new ‘Zen 4c’ core for cloud-native computing, tuning the core design for density and increased power efficiency to enable higher core count processors with breakthrough performance per-socket.”

Chiplets are key to AMD’s ability to hit these core counts.

This could mean a lot of things. The fact that AMD is labeling the core as “4c” implies Bergamo is more than just Genoa with a few extra chiplets slapped on the package, but the company isn’t sharing any details on how it’ll achieve these improvements or what technologies and/or design changes it will make to enable increased density in the same socket.

Earlier this month, ExtremeTech covered rumors that a future AMD Zen 5 CPU might offer as many as 256 cores. A 128-core Zen 4c CPU makes this more plausible. With Bergamo not expected until 2023, a 256-core Zen 5 CPU wouldn’t arrive before 2024 or 2025, a time frame that makes sense relative to where core counts are today.
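As a rough sketch of the chiplet math involved (the 8-cores-per-chiplet figure matches shipping Zen 3 parts; the 12-chiplet Genoa layout and a 16-core Zen 4c chiplet are our assumptions, since AMD hasn’t detailed either design):

```python
# Back-of-the-envelope chiplet math for AMD's announced core counts.
# Known: current Zen 3 server chiplets (CCDs) carry 8 cores each.
# Assumptions (unconfirmed by AMD): Genoa uses 12 chiplets, and a
# density-tuned Zen 4c chiplet doubles the core count to 16.
def total_cores(chiplets: int, cores_per_chiplet: int) -> int:
    return chiplets * cores_per_chiplet

genoa = total_cores(12, 8)     # 96 cores with standard Zen 4 chiplets
bergamo = total_cores(8, 16)   # 128 cores if Zen 4c packs 16 cores per CCD
zen5 = total_cores(16, 16)     # one way a future part could reach 256

print(genoa, bergamo, zen5)
```

Being able to scale chiplet count and cores-per-chiplet independently is exactly the flexibility the chiplet approach buys AMD.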

Even AMD’s positioning of Zen 4c as a specialty product makes sense. A 128-core Zen 4c or future 256-core Zen 5 is a CPU that’s very interesting to a specific group of customers. It’s not just a matter of the consumer market — there are plenty of data center workloads where power budgets are better allocated to storage, GPUs, or networking as opposed to centralizing as much CPU performance as humanly possible.

Other major announcements by AMD at the same event include Milan-X’s drop-in compatibility with Milan. V-Cache servers will launch in Q1 2022 from Cisco, Dell, Lenovo, HPE, and Supermicro, and AMD has signed a new deal to provide server chips for Meta. The company has continued to win new business and advance its position in the market overall, with excellent results throughout 2021.


November 9th 2021, 2:10 pm

Walmart is Using Self-Driving Trucks in a 7-Mile Delivery Loop


(Photo: Gatik)
In another big step toward building an autonomous shipping industry, Walmart has begun testing self-driving delivery trucks. The trucks loop around a fixed seven-mile route in Bentonville, Arkansas, where they deliver goods between a Walmart dark store and a Neighborhood Market.

The trucks themselves are multi-temperature box vehicles operated by Gatik, a short-haul logistics company focused on autonomous delivery. Their Bentonville route offers plenty of opportunity for the trucks to prove their versatility in a range of real-world situations, including intersections, traffic signals, and merging onto densely packed roads. By completing the loop several times per day, the trucks are able to expedite order fulfillment for Walmart customers.

“Through our work with Gatik, we’ve identified that autonomous box trucks offer an efficient, safe and sustainable solution for transporting goods on repeatable routes between our stores,” said Tom Ward, senior VP of last mile at Walmart US, in the companies’ press release.

Walmart first announced its plan to start utilizing driverless vehicles last year. The retail giant had just finished an 18-month stint partnering with Gatik to experiment with staffed self-driving vehicles, and those involved in the project felt they were about ready to put the trucks out on real deliveries. Despite Gatik’s 100-percent safety record, this was a big leap; while a safety driver had been on every test trip, Walmart’s endgame meant removing said driver from the middle mile (AKA transportation between fulfillment warehouses and retail stores) of a commercial delivery route for the first time in the world. 

In order to empty the cab, Walmart had to obtain approval from the Arkansas State Highway Commission, which gave the green light in December of last year. Anticipating local apprehension regarding autonomous vehicles, Gatik proactively reached out to state and municipal authorities, as well as emergency services, to create a stakeholder engagement plan. The companies also intend to hold informational workshops about self-driving technology to build and maintain local buy-in.

Walmart and Gatik hope to continue building out middle mile efficiencies as this part of the fulfillment process becomes more and more crucial. As the companies point out, urbanization has brought attention to shorter routes in recent years; routes under 100 miles have risen by 37 percent in the last ten years, and a majority of delivery routes are under 500 miles. Consumers also expect rapid delivery now more than ever, thanks largely to Amazon Prime and major retailers that offer same-day pick-up or drop-off. Regardless of whether someone is fully comfortable with self-driving delivery trucks on the road, they’re likely to find Walmart’s initiative worthwhile when their groceries arrive still cold.


November 9th 2021, 2:10 pm

AMD’s Massive Milan-X CPUs Coming in Q1 2022 With 768MB of L3 Cache, 64 Cores


The Milan-X CPUs and V-Cache equipped Zen 3 CPUs that AMD has been talking up since earlier this year are headed to market in both servers and the wider consumer space. AMD confirmed today that it would ship its Milan-X CPUs with up to 64MB of additional L3 cache per chiplet in Q1 2022. There are up to eight chiplets in a single Milan-X Epyc CPU, which means these new CPUs offer up to 512MB of additional L3 cache.

Milan-X would be impressive for its stonkin’ huge L3 alone, but the fact that the cache is mounted vertically makes the project more interesting. This kind of 3D chip stacking has been discussed around the metaphorical campfire for many years, but we’ve never seen a server chip commercialized with slices of cache mounted on top of the die.

According to AMD’s previous disclosures, the L3 doesn’t occupy space above any of Ryzen / Epyc’s hot spots, and cooling the additional L3 array isn’t a problem. It’s not clear which server customers will be interested in this kind of capability, but adding an additional 64MB of cache per chiplet could leave certain server SKUs with a lot of L3 per die. A 16-core Epyc CPU with eight chiplets would offer 48MB of L3 per core, for example. There aren’t many workloads that benefit from such lopsided configurations, but AMD did claim a 66 percent performance improvement when comparing a 16-core Milan against a 16-core Milan X in an RTL verification workload.
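The cache-per-core figure is easy to check (a quick sketch; the 32MB of base L3 per chiplet comes from existing Milan parts, and the 16-core, eight-chiplet SKU is the hypothetical example above):

```python
# L3 math for a hypothetical 16-core, 8-chiplet Milan-X configuration.
BASE_L3_PER_CHIPLET_MB = 32   # standard Zen 3 chiplet L3 (existing Milan)
VCACHE_PER_CHIPLET_MB = 64    # stacked V-Cache AMD announced for Milan-X
CHIPLETS = 8
CORES = 16

total_l3_mb = CHIPLETS * (BASE_L3_PER_CHIPLET_MB + VCACHE_PER_CHIPLET_MB)
l3_per_core_mb = total_l3_mb // CORES

print(total_l3_mb)     # 768, matching the headline figure
print(l3_per_core_mb)  # 48
```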

One new tidbit AMD revealed is that it is explicitly committing to keeping Milan-X socket compatible with Milan and existing SP3 deployments. This implies (but does not prove) that Milan and Milan-X will support the same TDP brackets.

AMD has stated that its L3 is built on a density-optimized version of TSMC’s N7 process, which should make it reasonably power efficient. There’s still going to be a power hit associated with turning all that cache on, and we don’t know if there are any other improvements in Milan-X to compensate. Our guess is that the new Milan-X chips will clock at least a little lower than standard Milan, to compensate for the power requirements of the new cache. Alternately, it’s possible that AMD has found enough low-level improvements to the core (or to TSMC’s 7nm node) to keep clocks largely the same while adding the additional L3. AMD is keeping clocks and exact SKU details under wraps for now.

The fact that AMD still expects to ramp Genoa in 2022 indicates that Milan-X is more of a swan song for the existing SP3 ecosystem than a massive new project. The V-Cache enabled Zen 3 CPUs that are expected to launch for Socket AM4 in Q1 may be similarly positioned.


November 9th 2021, 2:10 pm

NASA Still Working to Repair Hubble After Latest Error


Stop me if this sounds familiar: NASA’s Hubble Space Telescope is experiencing difficulties that caused it to fall back to safe mode last week. This is, of course, far from the first time Hubble has encountered a glitch. NASA just worked past a similar error earlier this year, and now it is again attempting to repair the aging observatory from the ground.

Hubble reported an error on October 23rd due to a loss of data synchronization messages. Two days later, another batch of messages was lost, causing the instrument to enter safe mode. This is a safety feature intended to preserve data and prevent damage to the telescope while the team on Earth assesses the problem. Currently, the focus of the investigation is on the Control Unit, which is part of the Science Instrument Command and Data Handling (SI C&DH) unit. That might sound familiar, too. The issue last time was the power control unit, which is a component on the SI C&DH. NASA’s solution was to swap to the backup SI C&DH, and it would appear that’s where the new issue has originated.

The Control Unit’s job is to provide timing information the instruments need to correctly respond to data requests. Data could be lost due to a malfunctioning Control Unit. NASA engineers are exploring possible workarounds in the absence of additional spare components. One option is to make changes to the flight software that would check for lost messages and compensate without putting instruments in safe mode. 

This past week, NASA flipped on an instrument called the Near Infrared Camera and Multi Object Spectrometer (NICMOS), which has been off for more than a decade. Hubble no longer needed it after getting the upgraded Wide Field Camera 3 in its final 2009 servicing mission. With NICMOS online, NASA can generate science data to better understand the issue without risking any vital components. 

A Hubble SI C&DH unit prior to its installation in the telescope.

According to the latest update from NASA, the team has not spotted any lost sync messages since activating NICMOS. NASA will reassess after the weekend, but it will bring the Advanced Camera for Surveys (ACS) back online on Monday if no further sync messages disappear. They will monitor for disruptions and make software changes as needed. This sounds less serious than the error from last spring, but you never know. Anything could spell the end for this senior space telescope. 

Currently, Hubble is vital to our study of distant objects because of its sheer scale and location outside the atmosphere. It was only supposed to last a few years, but it’s now more than 30 years old. Even if this issue proves easy to fix, Hubble can’t keep chugging along indefinitely. Luckily, it will be succeeded by the James Webb Space Telescope, which is scheduled to launch later this year. Webb is much more capable than Hubble, at least on paper. It’s got a lot to live up to, though.


November 8th 2021, 5:56 pm

Facebook Considering Retail Stores as it Looks to Build its Metaverse



(Photo: Greg Bulla/Unsplash)

By now we’ve all heard that Facebook is rebranding to Meta, with the hopes of building what it calls a Metaverse, where our avatars can interact with each other and hold meetings on top of mountains. To get there, though, people are going to need VR and AR hardware, and there’s the rub: most people don’t own any VR gear. Facebook is hoping to change that by launching a string of retail outlets to give people the chance to try the technology themselves.

According to the New York Times, which looked at some of the planning documents and spoke with people with knowledge about the plans, the company envisions a global chain of stores showcasing the company’s latest hardware, including its Quest VR headsets, Portal teleconferencing system, and possibly even sunglasses from Ray-Ban that can capture video and photos.

In its report, the New York Times states that names considered for the locations include Facebook Hub, Facebook Commons, Facebook Innovations, Facebook Reality Store and From Facebook, but it notes that Facebook Store is the most popular internally. Facebook Barn was apparently not in the running.

The Times also writes that the stores are “intended to spark emotions like ‘curiosity, closeness,’ as well as a sense of feeling ‘welcomed’ while experimenting with headsets in a ‘judgment free journey,’” according to the documents. We have no idea what any of that means, but perhaps it will become clearer once the plans firm up.

The first Facebook Store could theoretically open in Burlingame, CA, which is where the VR spinoff of Facebook, dubbed Reality Labs, has office space. Although Facebook’s plans for a Metaverse have been met with widespread skepticism, pushing adoption of VR technology isn’t a bad thing in itself, as VR currently exists as a small and somewhat pricey gaming niche. It just remains to be seen whether the populace as a whole trusts a company like Facebook to be the architect behind it.

Another wrinkle is that Facebook wouldn’t be the first tech giant to wade into the unpredictable waters of physical retail. Although Apple has been wildly successful, Microsoft attempted to recreate that level of success with its own chain of Microsoft Stores starting in 2009, only to shut them all down in 2020. Google is also flirting with the concept, having opened its one-and-only Google Store in NYC earlier this year.

A Meta spokesperson couldn’t confirm any of these plans to the NYT, stating only that its hardware is “in high demand.”


November 8th 2021, 5:56 pm

Intel Pays VIA $125M For Centaur Technology’s x86 Design Team


The Robert N. Boyce Building in Santa Clara, California, is the world headquarters for Intel Corporation. This photo is from Jan. 23, 2019. (Credit: Walden Kirsch/Intel Corporation)

Intel will reportedly pay VIA $125 million for some unspecified assets related to Centaur Technology. It’s been implied that the payment may be for the right to recruit Centaur Technology engineers to join Intel. Details are murky at best right now.

Intel and AMD dominate the headlines today and have for many years, but there was a third x86 player, once upon a time. While VIA and its subsidiary, Centaur, never held more than a single-digit share of the overall CPU market, there was a time when the company’s small, power-efficient cores commanded market share in the ultra-efficient, ultra-silent PC market.

The company’s closest recent brush with consumer relevance came in the late aughts, when its Via Nano platform briefly looked like it might be an Atom competitor. Ultimately, the Nano didn’t win the market share it needed to be successful in the long term, and Via generally faded out of the consumer market. The company has continued to make x86 chips and to iterate on its designs, but it pivoted to serving a mostly industrial and commercial client base.

Tiny boards and small form factors like mini-ITX and pico-ITX defined Via’s x86 product lines.

The odd thing about this announcement is its lack of specificity. As Anandtech notes, there are no particular details on what Intel has bought. The implication of the announcement is that Intel paid for the right to recruit the Centaur engineering team. That seems an odd thing to purchase, but Intel has no official comment on the topic and neither does Via.

One report from THG suggests that Intel has paid for the Austin engineering team but not its intellectual property or brands.

“Intel will recruit some of Centaur’s employees to join Intel with certain covenants from the Company for Intel’s recruitment,” a statement by Via Technologies with TWSE reads, according to Tom’s Hardware. “As consideration, Intel will pay Centaur $125 million. […] Closing of the transaction is contingent on certain conditions in the agreement to be satisfied. Payment will be made in full upon closing.”

Intel has been aggressively recruiting from around the industry and building up its teams as it tackles its own rejuvenation and plans to bring new facilities online. Recent launches like Alder Lake were already well in the pipeline before Pat Gelsinger came aboard earlier this year, but Intel’s aggressive capacity expansion plans are very much his work.

VIA will reportedly retain the right to build and manufacture future x86 CPUs, but the Centaur website is down as of this writing. If Via is still in the x86 business it evidently doesn’t feel it needs a consumer-facing portal any longer. It is not clear if this deal will have any impact on the company’s Zhaoxin joint development venture.




November 8th 2021, 9:55 am

Australia Aims to Launch Water-Hunting Lunar Rover in 2024


The moon is a big deal again. NASA is currently working toward a return to the lunar surface with the Artemis program and the (heavily delayed) Space Launch System rocket. Recently, the Australian Space Agency announced it would cooperate with NASA to send a rover to the moon in 2026, but a private Aussie rover could beat it there by two years. This robot, designed in cooperation with the University of Technology Sydney, will search for signs of water on the lunar surface, which could help support future exploration efforts. 

This will be a pint-sized rover with limited capabilities, but the designers have come up with some interesting gimmicks. The 22-pound (10-kilogram) rover will measure just 60x60x50cm, slightly larger than the proverbial breadbox. It will ride to the lunar surface aboard the Hakuto lander. Both the rover and the lander are being designed by ispace, a Japanese aerospace company that gained recognition during the Google Lunar XPrize contest.

The rover will have a robotic arm, designed by Stardust Technologies (based in Canada) and Australia’s EXPLOR Space Technology. The arm will sport sensors and cameras, some of which will be able to collect “haptic” data. The idea is the mission will beam back this data, and people on Earth will be able to “touch” anything the robot has touched with a special sensor glove. The cameras will also capture high-resolution images to render in virtual reality. 

The Hakuto lander designed by ispace.

Currently, the team is testing different configurations for the arm. Once engineers have settled on a design, they will have to integrate it with the rover. Then, EXPLOR Space Technologies will test the fully assembled bot at a facility in Australia. This testing will replicate the conditions on the moon as much as possible, ensuring that the robot will be able to operate in harsh conditions while also maintaining contact with Earth. 

As we look to the moon and beyond, having accurate geological data will help make long-term missions more feasible. NASA and other space agencies are very interested in so-called “in situ resource utilization.” That simply means you harvest resources at the destination to build infrastructure, manufacture fuel, grow food, and so on. This can reduce the amount of mass we have to launch from Earth, which is phenomenally expensive. We can look forward to hearing more about the little Aussie rover in the coming months.


November 5th 2021, 3:25 pm

Verizon, AT&T Delay 5G C-Band Launch Over Aircraft Interference Concerns


A 5G millimeter wave cell site on a light pole in Minneapolis.

Verizon and AT&T were hoping to light up their new mid-band 5G networks in the coming weeks, but they might have to wait a bit longer. The C-band rollout was previously set for December 5th, but the Wall Street Journal now reports it will be January 5th, at the earliest. At issue is the potential interference with the safety systems of some aircraft. 

5G connectivity in the US is in a rough place right now, and the C-band spectrum is seen as a major milestone in making 5G live up to the hype. Verizon and AT&T spent billions of dollars on the licenses for this spectrum recently because they’ve been trying to piece together a functional 5G network with scraps. I’m sure the carriers would dispute that description, but their focus on millimeter wave 5G speaks louder than any protestation. These signals have high speeds, but the range is abysmal — it doesn’t even work indoors. 

Mid-band 5G (like the C-band) can handle more data throughput than lower, LTE-like frequencies while still offering respectable range. There just isn’t a lot of it to go around. In the absence of proper mid-band spectrum, Verizon has resorted to dynamic spectrum sharing (DSS) to run a 5G network in the same bands as 4G. The results are not amazing, but C-band is supposed to fix all that.

The spectrum AT&T and Verizon want to use is in the 3.7-4GHz range, which is just one slice of the C-band. The C-band used to be reserved for satellite TV signals, but satellite transmissions in this range required enormous 10-foot and larger dishes — you might remember seeing them pretty often in the 80s and 90s. They’ve mostly been replaced by Ku-band technology, which can use smaller 1-foot dishes. With modern technology, it’s possible for operators to compress their remaining C-band usage, opening up wide swaths of these optimal frequencies for 5G. And that’s what they’re doing; operators will be done with the lower C-band on December 5th.

The aviation industry has been expressing concerns about C-band interference for some time, and this prompted the Federal Aviation Administration (FAA) to finally step in this week. It issued a “special information bulletin” that noted the potential for C-band interference with radio altimeters. These devices measure the height of the aircraft above the ground using radio waves between 4.2 and 4.4GHz, which is adjacent to the new 5G C-band. The FAA believes C-band transmissions could leak into the altimeter range, causing potential safety issues. 
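To put the spectral proximity in numbers (a quick sketch using the approximate figures cited above; the auctioned block’s actual upper edge sits slightly below the round 4GHz figure):

```python
# Rough guard band between the new 5G C-band block and the radio
# altimeter band, using the approximate figures in the article (GHz).
c_band_top = 4.0        # upper edge of the 5G C-band block (approximate)
altimeter_low = 4.2     # lower edge of the radio altimeter band
altimeter_high = 4.4    # upper edge of the radio altimeter band

guard_band_mhz = round((altimeter_low - c_band_top) * 1000)
print(guard_band_mhz)   # 200 -- the FAA's concern is out-of-band
                        # emissions leaking across this gap
```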

Industry experts stress that there’s no evidence of issues like this in other regions where mid-band 5G is more developed. Still, this is a “better safe than sorry” situation. The carriers are working with regulators to understand the FAA’s concerns. It’s likely the rollout will go ahead in early 2022, but you never know. Meanwhile, T-Mobile is sitting pretty on the big pile of spectrum it got from Sprint. The Band 41 (2.5GHz) block is generally faster than LTE and has better range than mmWave, so it’s already in a good place for 5G. T-Mobile also acquired a piece of C-band spectrum that won’t be clear until 2023. AT&T and Verizon will be pushing hard to get started with C-band in order to catch up to Tmo.


November 5th 2021, 12:53 pm

Robbers Make Off With a Truckload of RTX 30-Series GPUs


Gamers hoping to score an RTX 30-Series GPU have so far had to battle with bots, miners, and scalpers in their quest to buy a new GPU, and now we can add one more enemy to the list: highway robbers.

In a post on the EVGA forums, Product Manager Jacob Freeman wrote that one of the company’s shipments of EVGA RTX 30-series graphics cards was recently stolen from a truck en route from San Francisco to its Southern California Distribution Center. We have no way of knowing if the entire truck’s contents were taken, or how many GPUs constitute a “shipment,” but it’s just another sign of the times that GPUs are so valuable people would pull off a seemingly brazen plan to snag a batch of them.

Freeman noted the shipment contained cards with values ranging from, “$329.99 up to $1959.99 MSRP,” so it sounds like the shipment contained the entirety of the RTX lineup, from the midrange RTX 3060 all the way up to the flagship RTX 3080 Ti. It’s unclear if they were stolen from a parked truck, or if it was a Heat scenario where well-trained GPU mercenaries wearing hockey masks rammed the truck and knocked it over, then removed the back door with military-grade explosives. That said, since Freeman didn’t indicate any of the GPU couriers were injured, we can assume the truck was probably broken into when left unattended, but there are no more details at this time as Freeman’s post is quite brief.

This is just another plot twist in the GPU shortage we’ve all been living through the past year or so, where expensive GPUs are practically impossible to find, at least at semi-normal pricing. Scalpers and crypto miners have sucked up all of the already-limited supply, making it impossible for actual gamers to get their hands on a new GPU. It’s even caused Newegg to implement a lottery just for the chance to pay full price, or more, for a new GPU, and they also typically bundle GPUs with expensive accessories like motherboards and monitors. It’s great if you’re building a new system, provided you don’t get a faulty power supply, but if you just want a lone GPU your options are sorely limited.

Freeman notes in his post that it’s against both state and federal law to buy stolen property, and it’s also against the law to “conceal, sell, withhold, or aid in concealing selling or withholding any such property.” So if you know anything about this heist, or someone connected to it, you can email EVGA with any information. Freeman didn’t include any type of reward or bounty for help finding the hijacked graphics cards, but we can sure think of something that people might want.


November 5th 2021, 12:53 pm

Overwatch 2, Diablo IV Delayed Indefinitely


(Photo: Activision Blizzard)
Activision Blizzard is pushing back the launches of Overwatch 2 and Diablo IV to give its creative teams more time to develop both games.

In what looks like a presentation slide shared by Axios reporter Stephen Totilo on Twitter, Blizzard states it’s “now planning for a later launch for Overwatch 2 and Diablo IV than originally envisaged.” Though the company has reportedly made great strides in wrapping up the games, the extra time will allow its teams to “continue growing their creative resources to support the titles after launch.” Blizzard admits this will impact the financial boost it expected to see in 2022.

Unfortunately for fans, this isn’t the first time Overwatch 2 and Diablo IV have been delayed. Both games were originally slated to come out sometime this year, though Blizzard shared during its Q4 2020 earnings call that this would no longer be the case. Neither title was given a new release date at that point, meaning those waiting to play Overwatch 2 and Diablo IV have been floating in video game purgatory for a while. 

Though Blizzard claims the delay is for the betterment of the games themselves, it’s likely that internal upheaval is also forcing the company to slow down. Blizzard was the focus of a sexual harassment lawsuit filed by the State of California in July, in which it was revealed that female employees have faced discrimination, verbal abuse, and physical assault from colleagues and managers for years. Since the company’s toxic inner workings were made public, several members of leadership have stepped down, including president J. Allen Brack in August and his replacement, co-head Jen Oneal, who announced this week that she’s leaving after only three months on the job. Two creative leads assigned to Diablo IV were also removed from Blizzard shortly after the lawsuit made headlines. One can only imagine that with such high-level churn (and, hopefully, various efforts to make Blizzard a healthier and safer place to work), it’s difficult for the teams in the trenches to get a whole lot done.

This isn’t the end of Blizzard’s negative PR, though. The earnings call in which the company first announced its recent delays also mentioned that its user base has plummeted by 13 percent (something a lack of major releases won’t exactly help). While the harassment suit and the original Overwatch 2 and Diablo IV delays have certainly contributed to this decline, some suspect that Amazon’s release of New World, a World of Warcraft competitor, may also be partly to blame.

Now Read:

November 5th 2021, 10:22 am

Nintendo Will Produce 20% Fewer Switch Consoles Due to Chip Shortages


(Photo: Ehimetalor Akhere Unuabona/Unsplash)
Nintendo predicts that its Switch output this fiscal year will be 20% less than what it cranked out in 2020, thanks to the chip shortage we’re all too familiar with by now. Instead of producing the 30 million units it originally planned, the company will likely produce about 24 million by the end of March, Nikkei Asia reports.
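The arithmetic behind the cut is straightforward; a quick sketch using the figures reported above:

```python
# Quick check on the production figures reported above.
planned_units = 30_000_000   # Nintendo's original target for the fiscal year
cut = 0.20                   # the reported 20% reduction
revised = int(planned_units * (1 - cut))
print(revised)  # 24000000, matching the ~24 million units Nikkei reports
```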

Demand for the Switch has been high since Nintendo announced a new OLED model back in July. Pre-orders for the new Switch sold out within a few weeks, and the OLED model has received overall glowing reviews, resulting in pretty consistent interest in the months since. Demand for the Switch and Switch Lite was also high last year when people were a bit more desperate for at-home entertainment, but the market’s inability to keep up wasn’t chip-specific back then; manufacturing had generally slowed due to government-mandated stay-at-home orders, so retailers weren’t able to fill stock as quickly as they would have liked.

Now Nintendo is facing a perfect storm of market complications that has practically ruined the company’s ability to meet the production goal it set last year. Not only do consumers want the new OLED model, but a steep decline in the global chip supply has forced Nintendo into a bit of a waiting game. As various gift-giving holidays approach in tandem with a record number of retail items out-of-stock, the immediate future isn’t looking particularly bright.

The new(er) Nintendo Switch OLED model. (Photo: Jeremy Bezanger/Unsplash)

The recent chip crunch has rocked nearly every corner of electronics manufacturing since early this year. Apple has even begun shifting its chip supply away from its iPads to feed demand for the new iPhone 13. The shortage, which is expected to continue through at least mid-2022, has turned the tech and automotive industries into unanticipated allies, as both are struggling with complicated supply chain issues (and, as a byproduct, mass consumer disappointment). 

The Switch, Switch Lite, and OLED model all contain the Tegra X1 “Mariko” processor, which will reportedly no longer be produced after this year. Nintendo had the opportunity to swap in a new chip when it came out with the OLED model, but the latest edition’s internals remained the same as those of its predecessors, raising questions about the beloved handheld’s future. Many Nintendo enthusiasts are continuing to hold out hope for a 4K-compatible “pro” Switch model, which would require the video game company to pivot toward a different processor.

Until then, the Switch will likely join the ranks of other gaming consoles (like the Xbox Series X and PlayStation 5) that have been tough to obtain this year.


November 4th 2021, 2:47 pm

Astronomers Now Have a Better Idea Where Planet 9 Isn’t


Nobody has found Planet Nine yet, but at least we've almost figured out where to look. Image: NASA

There was a time when our solar system had nine planets. Then, in 2006, astronomers decided Pluto didn’t count anymore. There might still be nine planets, though. Astronomers Mike Brown and Konstantin Batygin have proposed an undiscovered ninth planet in the outer solar system. Efforts to find the alleged planet are still in the early phases, but Brown and Batygin just finished a major survey with the Zwicky Transient Facility (ZTF) telescope. They didn’t find Planet 9, but now we know where not to look, says Bad Astronomy.

Most of the planets in our solar system are visible with the naked eye at least occasionally, but Planet 9 is theorized to orbit far beyond Neptune. If it exists, it’s so far away that we can’t get a good look at it even with huge telescopes. In the void of interplanetary space, its only company would be small Trans-Neptunian Objects (TNOs). The clustering of TNO orbits is what initially drew Brown and Batygin to the possibility of Planet 9. Everything else, though, is a guess. 

Brown and Batygin used a number of educated guesses to develop software to simulate Planet 9 for various values of size, reflectivity, and orbital shape. The ZTF isn’t a particularly big telescope at just 1.2 meters, but it can view a huge 47×47-degree patch of the sky in each image. It looks for objects that move or change brightness, making it perfect to look for some versions of Planet 9. 

The Zwicky Transient Facility (ZTF) began operation in 2017 and has a huge 47-degree square field of view.

Because we’re still uncertain where Planet 9 is (if it exists at all), Brown and Batygin had to start someplace. There’s a possibility Planet 9 is smaller and closer to us, and simulations showed that it would appear in ZTF images about half the time if that were true. The pair used the software simulation to create a database of positions and brightnesses for Planet 9 and scanned the ZTF database for a match, going back to when it began operations in 2017, but they didn’t find one. 
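The search described above amounts to cross-matching a table of simulated positions against a survey catalog. Here is a minimal sketch of that idea, with invented toy coordinates, a flat-sky distance approximation, and an assumed match radius — none of these values come from the actual analysis:

```python
from math import hypot

# Toy illustration of the search strategy: compare simulated Planet 9
# positions (one per survey epoch) against detections in a transient
# catalog, within a positional tolerance.

MATCH_RADIUS_DEG = 0.01  # assumed tolerance, not from the real study

def find_matches(predicted, observed, radius=MATCH_RADIUS_DEG):
    """Return (epoch, detection) pairs closer than `radius` degrees.

    `predicted`: list of (epoch, ra, dec) from the orbit simulations.
    `observed`:  list of (epoch, ra, dec) from the survey catalog.
    Uses a flat-sky approximation, which is fine for tiny separations.
    """
    matches = []
    for epoch, ra_p, dec_p in predicted:
        for e_obs, ra_o, dec_o in observed:
            if e_obs == epoch and hypot(ra_p - ra_o, dec_p - dec_o) < radius:
                matches.append((epoch, (ra_o, dec_o)))
    return matches

# Toy data: no detection falls within the radius of a predicted position,
# mirroring the null result Brown and Batygin reported.
predicted = [(2018.5, 35.20, 12.10), (2019.5, 35.45, 12.22)]
observed = [(2018.5, 80.00, -5.00), (2019.5, 35.44, 12.23)]
print(find_matches(predicted, observed))  # prints []
```

A real pipeline would use proper spherical separations and brightness limits, but the core loop — predict, then scan the archive for a counterpart — is the same.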

This doesn’t mean that Planet 9 is a fantasy — it just means it most likely won’t be found in a closer orbit. That means we’re looking for something larger than Earth orbiting billions of kilometers away. There is hope that the upcoming Vera Rubin Observatory with its 8.4-meter mirror will be able to pin down the location. Either way, at least we know where Planet 9 isn’t, which is a step in the right direction. 


November 4th 2021, 12:17 pm

Farewell, 14nm: Intel Launches Alder Lake


Intel’s Alder Lake launches this morning, and it’s a momentous occasion for the chip giant. For over six years, Intel’s desktop processors have been stuck on the 14nm process.

Intel always planned for the first 10nm CPUs to be mobile processors, but the company didn’t anticipate how long it would take to build a viable version of a 10nm chip in the first place. Alder Lake is built on an advanced version of Intel’s 10nm process that the company is now referring to as Intel 7, and it brings a hybrid CPU design to the desktop for the first time, with a mixture of performance and efficiency cores.

ExtremeTech’s full review of Alder Lake isn’t quite ready for this morning’s 9 AM NDA lift, but we’ve spent a lot of time with the chip over the past week or so, especially in comparison to AMD’s Ryzen 9 5900X. We’ll have full numbers up shortly, but one thing I wanted to say in time for the actual launch: Alder Lake finally, finally moves the ball forward from the 14nm era.

Intel’s Core i9-12900K is extremely well-positioned relative to chips like the Ryzen 9 5900X, with generally faster performance than AMD’s competitor across a range of applications. It balances these impressive wins with a few problems: some games fail to launch because Denuvo’s DRM doesn’t properly recognize Intel’s hybrid CPU cores, and workloads can sometimes wind up stuck on the efficiency cores if you don’t position desktop windows appropriately. The latter is trivial to fix by rearranging windows, and Intel says incompatible Denuvo titles are being patched as quickly as possible.

While the 12900K opens a gap between itself and the Ryzen 9 5900X, the latter still appears to lead on power consumption. The shift to 10nm (Intel 7) saves Intel some power compared to Rocket Lake — not exactly a surprise — but hasn’t enabled Intel to quite catch AMD on power. More on that in the near future, but AMD isn’t exactly out of this fight from a power efficiency standpoint. This makes AMD’s recent comments about power efficiency and its ability to fight Intel with conventional Ryzen cores a bit more interesting.

It’s not all upside for Alder Lake. The platform is going to be quite expensive at launch given the cost of Z690 motherboards and DDR5 memory. Availability will probably be initially limited given how rapacious scalpers have been this past year, and that may add to pricing pressure. We’ll have benchmark-backed details on the two chips in the near future.

On the whole, though, Alder Lake is very good. It improves on all of the areas Intel needed to improve on and it moves performance forward in absolute terms. High prices may deter some customers — they certainly don’t hurt AMD’s competitive standing, since its chips rely on mainstream DDR4 — but they won’t be a meaningful barrier to long-term adoption. DDR5 prices will fall as the memory goes mainstream.

After dominating the desktop CPU performance charts with Zen 3 for a full year, AMD is going to be facing real competitive pressure on the desktop again. Given historical trends, this is likely to lead to good things for desktop enthusiasts and downward pressure on CPU prices.


November 4th 2021, 12:17 pm

Modder Open Sources iPhone USB-C Mod


As humans, we all have basic needs, including food, water, shelter, and a USB-C iPhone. Okay, that last one isn’t strictly a need, but a lot of people want a USB-C iPhone so much they’re willing to embark on an expensive and complex mod project. Recently, engineering student Ken Pillonel showed off his amazing USB-C mod for the iPhone X. Now, he’s decided to open up the mod to all interested parties. There’s a GitHub repository with technical details, plus a video explainer. If you were hoping it was as simple as drilling a hole and soldering a few wires, you’re in for disappointment. There’s a reason it took an aspiring robotics engineer to devise the mod. You could also buy the original prototype, which is being auctioned on eBay. 

Apple was early to adopt USB-C for its laptops, and the new port standard then expanded to the company’s high-end iPad Pro line. However, the iPhone and the base-model iPad still use a Lightning port. So, Apple enthusiasts are forced to keep multiple cables around. The desire for a USB-C iPhone is so intense that Pillonel’s mod got mainstream attention. The original YouTube video, which is a mere 39 seconds in length, has already blown past one million views. 

As promised, the longer demo video is now available. The 14-minute video details the entire project from setting goals to designing a custom internal PCB. Just figuring out how to connect the iPhone internals to a new port took a lot of trial and error, calling for the sacrifice of many Apple-certified Lightning cables. Eventually, Pillonel decided he needed to reverse engineer the Apple C94 Lightning controller, which is inside each of those cables. That component needed to move into the iPhone, and there’s no room for it. Luckily, there are Chinese knockoff C94 Lightning plugs that were easier to study. 

Eventually, Pillonel designed a custom flexible circuit board to go inside the phone, linking the USB-C plug with the Lightning-locked internals—so both charging and data transfer are functional. Unlike the stock components, this circuit is mounted vertically between the battery and Taptic Engine. The board design is based on the knockoff C94 components from China, so Pillonel is confident he can release the technical details without fear of legal threats from Apple. You can find this and everything else you need in the GitHub project. Well, you won’t find the technical skill you need to make it happen.

If you’re desperate for a USB-C iPhone and reworking circuit boards isn’t in your skillset, you can buy the original USB-C iPhone from Pillonel. It’s on eBay now with a week left in the auction, as of this posting. However, the price has already reached nearly $5,000. It seems plenty of people value a USB-C iPhone at least as much as food or shelter.


November 4th 2021, 12:17 pm

Facebook Planned to Target Six-Year-Olds to Compensate for Teen Departures


(Photo: Alex Haney/Unsplash)
Facebook’s user base has been shrinking, with nearly half of its current users expected to drop off over the next two years. The company’s solution? Targeting six-year-olds. 

Based on a new document released by whistleblower Frances Haugen, Facebook (now known as Meta following a fresh rebrand) appears to have been planning to recruit six-year-olds to various apps within its portfolio. The first page within the document is a photograph of a job ad for various child research roles under a “youth initiative” umbrella. Though the ad says the initiative “prioritizes the best interest” of children and “recognizes the responsibilities it brings,” the document quickly jumps to photos of an internal memo detailing Facebook’s plan to create features and settings that are tailored to children as young as six. The memo does not state what these features or settings would look like, but the company has previously faced a backlash for targeting youths through a proposed Instagram for Kids.

Facebook admits in its memo that it hasn’t historically prioritized young children due to the Children’s Online Privacy Protection Act (COPPA), which is intended to protect those under the age of 13. While COPPA doesn’t prohibit websites and mobile apps from collecting data on children as is frequently thought, it does mandate that such operators provide parents with ample opportunity to restrict what data is collected. It also requires operators to limit the amount of time they keep children’s information and to prioritize data integrity. Given these tough (read: perfectly reasonable, even for adult users) guidelines, Facebook hasn’t bothered to create online experiences for kids beyond its Messenger Kids app.

(Photo: Pan Xiaozhen/Unsplash)

However, given how many young users are ditching Facebook, the company apparently decided to begin targeting kids who would have only just begun learning how to read. The memo released by Haugen implies that Facebook had begun dividing focus markets into age groups, including ages six through nine and 10 through 12. 

“Our company is making a major investment in youth and has spun up a cross-company virtual team to make safer, more private experiences for youth that improve their and their households’ well-being,” the job posting from the memo reads. It’s worth noting that beyond the general risks social media poses to mental health, the Facebook-owned Instagram app has recently been found to uniquely harm young people’s self-image. This raises the question: Can a social media experience aimed at children in early stages of development really benefit their well-being?


November 3rd 2021, 6:50 pm

Epic Shuts Down Fortnite in China


Fortnite is the most popular game in the world, but Epic has decided to give up on a huge potential market. After experimenting with a Chinese beta of Fortnite called “Fortress Night,” the company has decided to shut the game down on November 15th. New account registration ended on November 1st. Epic has not provided a reason for the shutdown, but we can safely assume it has something to do with the Chinese government’s tightening restrictions on video games. 

For the uninitiated, Fortnite is a battle royale game that drops players into an area that shrinks over time. The group of players is slowly whittled down until someone becomes the last one standing. Epic didn’t invent this formula, but it has easily blown past all the competing games in the genre. Despite Fortnite’s more cartoony vibe compared with other popular shooters, Epic took things very slowly in China. The Fortress Night test didn’t even include in-app purchases, which is how Epic makes money on the free-to-play game. 

China has traditionally taken a hard line when it comes to video games — it didn’t even allow consoles in the country from 2000 until 2015. It has also come down hard on publishers attempting to release games in the country. Government officials must review all games before they’re released and can demand changes to remove violence, sexual content, and gambling. In fact, China’s version of PUBG (another popular battle royale game) was retooled several years ago into the confusingly named Game for Peace. There’s no gore, and instead of dying, defeated players wave goodbye and give you a loot box. The government has also restricted kids to three hours of online gaming per week, and has even pushed facial recognition to make sure no one is skirting the rules by sharing login credentials. 

The game’s official notice of change simply says the test has come to an end and that the servers will shut down. The only important info is the November 15th shutdown date, and of course, there’s no explanation for the end of testing. The lack of details is suspicious but not surprising — even in failure, Epic has to be gracious in China.  

Epic launched Fortress Night with the help of local megacorporation Tencent, which owns a 40 percent stake in Epic. Numerous foreign firms have attempted similar maneuvers, partnering with a local company that has a history of productive cooperation with Chinese regulators. Sometimes it works and companies find new audiences in China. In this case, though, it apparently wasn’t enough. It’s possible Chinese censors wanted too many changes or control over the game before it would allow Epic to add the in-app purchases that have made it so much money in the west. Chinese gamers will just have to make do with waving corpses.


November 3rd 2021, 6:50 pm

Switch Modder Pleads Guilty to Piracy, Will Pay Nintendo $4.5 Million


(Photo: Alvaro Reyes/Unsplash)

A few years ago, a group called Team Xecuter began releasing mod kits for the Nintendo Switch. That’s nothing new — cracking game consoles to run homebrew software has been common for ages. However, most people modding their consoles do so to play pirated ROMs, and Team Xecuter made the mistake of leaning into the piracy angle in private while publicly promoting homebrew development. With the feds in possession of some damning emails, one member of Team Xecuter has pleaded guilty to criminal copyright infringement and will pay Nintendo $4.5 million in damages, according to documents reviewed by TorrentFreak.

Team Xecuter started its Switch project with a USB dongle that could load the custom SX OS on the device, but that only worked on early units with a flawed Tegra chip. Newer hardware required a mod chip that you would install inside the console. Nintendo sued to stop the sale of these devices, but that is separate from the criminal investigation that ensnared Gary Bowser, known online as GaryOPA. The 51-year-old was arrested last year, along with French national Max Louarn, who is still awaiting extradition to the US. A Chinese national named Yuanning Chen is also sought but is most likely beyond the reach of authorities. 

Bowser, who shares a fitting last name with Nintendo of America president Doug Bowser, has pleaded guilty to two of the original 11 charges: conspiracy to traffic in circumvention devices, and actually trafficking in them. Key to the government’s case was email correspondence in which Bowser admitted he was spending time developing “underground” stuff for SX OS that would make popular pirated ROM repositories easier to use. He also explored hosting ROMs on a secure server to further streamline the process. This undercut Team Xecuter’s public claim that it was merely providing tools for running homebrew software. 

Team Xecuter’s SX Core mod chip for the Switch.

Nintendo claims it lost between $65 million and $150 million due to the Xecuter scheme, but that’s impossible to know. Some people probably did only want the SX kits to run their own custom software or backed-up content, but SX OS still circumvented copy protection systems. Like it or not, the law is pretty clear on this sort of endeavor.

While Bowser says he only made about $320,000 for his part in Team Xecuter, he’s agreed to pay Nintendo $4.5 million in damages. Bowser could also get slapped with another $750,000 in criminal fines at sentencing. The court could also impose up to five years in prison for each of the two guilty pleas. Bowser hasn’t been officially promised leniency, according to sentencing documents.


November 3rd 2021, 6:50 pm

Jupiter’s Great Red Spot Far Deeper Than Previously Known


Jupiter’s Great Red Spot (GRS) is a ten-thousand-mile-wide tempest that has been roiling in the planet’s atmosphere for centuries, but new data from a pair of studies unveiled at a NASA press conference has shown that the GRS and its feeder currents extend deeper into the gas giant than we previously knew.

At the press conference, two teams of astronomers, one led by Scott Bolton and the other by Marzia Parisi, reported that they used microwave radiometry and gravity measurements, respectively, to characterize Jupiter’s atmospheric vortices, including the GRS. Both teams relied on the space probe Juno; Bolton’s group used Juno’s onboard microwave radiometer to characterize the vertical structure of the GRS (as well as two other storms) and found that they extend well below the planet’s cloud layer.

In terms of its footprint, the GRS is more than capable of swallowing the Earth entire. It is smaller now than it used to be, and faded, but it’s still awful. Conservatively, the maelstrom extends at least 120 miles into the planet, but it could be as much as 300 miles deep — it could go even further, but that’s as deep as Juno’s radiometer can “see” it. “The Great Red Spot is as deep within Jupiter as the International Space Station is high above our heads,” said Parisi, a research scientist at NASA’s Jet Propulsion Laboratory. But compared to the zonal jets that feed the storm, the GRS is just a surface eddy. Those zonal currents cut almost two thousand miles deep into Jupiter’s lightless depths.

An infrared image of Jupiter’s north pole, taken by Juno’s Jovian Infrared Auroral Mapper (JIRAM) instrument. Image: NASA/JPL-Caltech/SwRI/ASI/INAF/JIRAM

Gigantic, stable, long-lived storms are a recurring theme on Jupiter, and with Juno we finally have a chance to look below the tops of the clouds. The planet is mostly hydrogen by mass, and it has no true surface; instead, we think it makes a gradual transition through thicker and denser clouds into an opaque, ammoniac slush with perhaps the density of water. A thousand kilometers below the cloud tops, pressures and temperatures climb so high that the distinction between liquid and gas begins to blur. Hydrogen becomes a transparent supercritical fluid, free of surface tension and capable of diffusing through solids like a gas instead of seeping like a liquid. But there’s nothing solid on Jupiter. Wind belts carve unobstructed paths through the gas giant’s atmosphere. These stable circulations produce resident storms arranged at the planet’s poles with geometric precision: an octagon of cyclones at the north pole and a pentagon at the south. It is thought that Jupiter’s depths produce raindrops of helium, neon, and perhaps even diamond, which fall into a bottomless ocean of liquid hydrogen that never becomes solid, even at the planet’s core.

As Juno continues to investigate the planet, we’ll continue to receive updates. Juno is orbiting Jupiter in order to learn about the planet’s origin and evolution, to study its core and magnetic field, and to obsessively photograph everything it possibly can. More data is certain to come, as NASA has extended Juno’s mission until at least 2025.

“Jupiter’s beauty is not just skin deep,” says Bolton, principal investigator of the Juno mission. “And we are seeing, for the first time, the atmosphere in three dimensions.”


November 2nd 2021, 8:17 pm

5-Day Stimulation Treatment Effective Against Depression


(Photo: Natasha Connell/Unsplash)
Researchers at Stanford University appear to have found a way to treat persistent depression using magnetic brain stimulation. In a study published last week, the university’s School of Medicine shared that after treating 29 participants using Stanford accelerated intelligent neuromodulation therapy (SAINT), 78.6% of them experienced rapid remission.

SAINT is a form of transcranial stimulation that involves delivering magnetic pulses to specific areas within the brain. Researchers at Stanford began by using MRI to determine which part of each participant’s dorsolateral prefrontal cortex (the part of the brain that manages executive functions) would receive stimulation. They then applied 1,800 pulses—a level of stimulation usually reserved for treating other neurological disorders, like Parkinson’s—to a subregion that works closely with the subgenual cingulate, a part of the brain that is hyperactive in people with depression. The magnetic stimulation helped strengthen the connection between both parts of the brain. 

Though this isn’t the first time transcranial magnetic stimulation has been used to treat depression, the type of treatment currently approved by the Food and Drug Administration (FDA) involves delivering 600 pulses once per day for six weeks. Unfortunately, it isn’t quite as effective as the treatment Stanford seems to have just devised; using the “old” method, only about 50% of patients have experienced alleviated depression, and even fewer have enjoyed remission. 

In contrast, Stanford’s newly proposed method involves delivering magnetic pulses over the course of 10 10-minute sessions per day, with 50-minute breaks in between. Stanford study participants who experienced improvement found themselves feeling better after only five days of treatment. By several standard methods of psychiatric evaluation, the 78.6% of participants who’d seen improvement were no longer considered depressed. 
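Assuming the 1,800-pulse figure is per session and the approved course runs every day (the article doesn’t specify either), the totals for the two protocols work out roughly like this:

```python
# Back-of-envelope comparison of the two stimulation protocols described
# above. Both per-session and per-day assumptions are ours, not the study's.

saint_total = 1_800 * 10 * 5   # pulses/session x sessions/day x days
fda_total = 600 * 7 * 6        # pulses/day x days/week x weeks

print(saint_total)   # 90000 pulses over five days
print(fda_total)     # 25200 pulses over six weeks
print(round(saint_total / fda_total, 1))  # roughly 3.6x the total dose
```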

“It’s quite a dramatic effect, and it’s quite sustained,” said Alan Schatzberg, MD, study co-author and professor of Psychiatry and Behavioral Sciences.

Nolan Williams demonstrating SAINT on a former study participant. (Photo: Steve Fisch)

The treatment—which Stanford has patented and is seeking FDA approval for—is already considered a medical godsend by those who have struggled with more traditional depression treatments. Participants in Stanford’s study had experienced depression for an average of nine years and hadn’t found long-term relief in medication. One had tried an assortment of psychiatric medications, one of which had made him suicidal; another had experienced lifestyle symptoms like procrastination, poor quality of sleep, and reduced motivation to participate in hobbies. Both participants are now experiencing improved mental wellness.

Beyond offering a glimmer of hope to those who have dealt with depression long-term, SAINT may also prove useful in emergency psychiatric applications, according to senior study author Nolan Williams, MD. Patients with suicidal tendencies are often prescribed medication while receiving medical and psychological treatment, but such medications can take weeks (if not months) to produce results. This isn’t particularly useful for patients in desperate need of immediate relief. With how quickly SAINT takes effect, the treatment may offer real relief for patients and healthcare professionals alike who are all too familiar with mental health crises.


November 2nd 2021, 1:48 pm

Amazon to Launch First Kuiper Internet Satellites Next Year on Pint-Sized RS1 Rocket


Amazon has been conducting ground-based testing for its upcoming satellite internet constellation, known as Project Kuiper. While the retail giant doesn’t expect to have a fully deployed network until the latter half of the decade, it will begin orbital testing as soon as late 2022 thanks to a launch contract with a little-known entity called ABL Space Systems. The firm will launch two Kuiper satellites aboard its RS1 rocket. 

Amazon isn’t alone in its quest to bring internet access into the space age. Elon Musk’s SpaceX has been deploying satellites as part of its Starlink platform for more than a year. It even launched a beta program last year, and it was so successful that the beta label is now gone. But SpaceX has its own rockets like the Falcon 9, which can haul dozens of satellites into orbit, and then land safely on a boat for refurbishment and reuse. That has made SpaceX’s operations extremely cheap — we don’t know exactly how cheap as SpaceX is a private company, but it charges third parties around $62 million total for a launch. 

Previously, Amazon inked a deal with United Launch Alliance (ULA) to send its first batches of satellites into space aboard nine Atlas V rockets, a less advanced but highly reliable launch system. That will get Amazon on its way to the 3,236 nodes currently planned, but it has until July 2026 to get the first half of the constellation into orbit. The arrangement with ABL is vital, as it will help Amazon get its first two prototypes, called KuiperSat-1 and KuiperSat-2, into space where it can conduct essential tests. 

The RS1 rocket is much smaller than the Falcon 9, clocking in at 88 feet tall versus 230 feet for the SpaceX rocket. It uses nine 3D-printed EP1 engines, which give it a modest payload capacity of about a ton to low-Earth orbit (LEO). The Falcon 9 can hoist 15.6 tons into space with enough fuel left over for a gentle landing. However, Amazon says the RS1 will do just what it needs. “RS1 delivers the right capacity and cost-efficiency to support our mission profile,” Amazon said. 

The exact timeline of the launch is up in the air — the companies are aiming for late 2022, but ABL has yet to actually launch any missions with the RS1. Amazon is willing to take a risk here in large part because the rocket is so flexible. It’s fully containerized, making it easy to transport, and the small size means all you need is a 50×150-foot concrete pad and the company’s GS0 launch kit. 

After all that, the vital orbital tests could be over in a matter of minutes. Once in space, Amazon will wait for the prototypes to pass over the Texas test facility. Assessing the speed and stability of the connection will be vital to making Kuiper work. We already know Starlink works, so Amazon has to come to market with a working service if it wants to catch up.


November 2nd 2021, 1:48 pm

Apple’s Most Backordered Product is a $19 Cloth


(Photo: Apple)
Apple has long been accused of overcharging for its products. Here to test that theory is the new Apple Polishing Cloth, a $19 cloth that has somehow become the brand’s most backordered product.

The 6.3-by-6.3-inch cloth became available in October, when Apple typically announces its new products. But unlike Apple’s other recent releases, like the long-awaited M1 MacBook Pro and revamped iPad Mini, the Polishing Cloth joined the brand’s roster of offerings quietly and without any fanfare. Still, the cloth quickly made its way to best-seller status, despite not technically being a new product at all. The cloth has been included with Apple’s Pro Display XDR before, but it’s now on sale for all Apple devices.

With its light gray non-woven microfiber and embossed Apple logo in the bottom right corner, the Polishing Cloth itself is incredibly unassuming. The product is inspired by the microfiber cloth previously provided for free with the Pro Display XDR, one of Apple’s more powerful monitors with a low-reflectivity surface. Fans of the cloth loved it so much they asked for ways to buy extras, which prompted the company to offer it as a separate product.  

(Photo: Josh Hendrickson/ReviewGeek)

But if there’s one thing Apple is good at, it’s marketing. The cloth’s product page says it can clean “any Apple display, including nano-texture glass, safely and effectively,” thus implying any other cloth runs the risk of leaving smudges or scratches behind. The Polishing Cloth is also “compatible” with 88 different Apple products, from iPhones and iPads to various laptop, desktop, and watch models. As for why the Polishing Cloth wouldn’t be compatible with certain items, Apple doesn’t say.

As someone who’s admittedly loyal to Apple largely due to its products’ aesthetics, even I find the idea of a $19 branded microfiber cloth to be a bit absurd. But the accessory is relatively cheap compared to other Apple accessories, meaning the company may be pulling the equivalent of a supermarket that stocks candy and gum near the cash registers—what’s one last-minute accessory when you’re already committing to a major purchase? 

Of course, Apple’s strategy with its Polishing Cloth may not be so complicated. After all, the cloth’s product page sits alongside that of the Apple Mac Pro Wheels Kit, a set of four casters intended for adding “improved mobility” to one’s Mac Pro for the low, low price of $699. Apple’s accessories may just be special because they’re expensive, not in spite of the fact. 

Now Read:

November 2nd 2021, 1:48 pm

5D Optical Disc Could Store 500TB for Billions of Years


Hard drives and flash storage have gotten more reliable over the years, but only on a human timescale. What if we need data storage that lasts longer? Decades? Millennia? The key to that vision might be 5D optical storage, which has a data density 10,000 times that of a Blu-ray disc. But it’s always been far too slow to write data onto glass plates in this way—until now. A new technique developed at the University of Southampton speeds up the process dramatically, without impacting the reliability of the data. 

This type of data storage uses three layers of nanoscale dots in a glass disc. The size, orientation, and position (in three dimensions) of the dots gives you the five “dimensions” used to encode data. Researchers say that a 5D disc could remain readable after 13.8 billion years, but it would be surprising if anyone was even around to read them at that point. In the shorter term, 5D optical media could also survive after being heated to 1,000 degrees Celsius. You can see an earlier, smaller version of the disc above. 
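The five "dimensions" can be pictured as five parameters per dot. Here is a minimal sketch in Python; the field names and values are illustrative, not taken from the paper:

```python
# Each nanoscale dot ("voxel") is described by five parameters: its 3D
# position within the disc's three layers, plus two optical properties
# (the size and orientation of the birefringent nanostructure).
from dataclasses import dataclass

@dataclass
class Voxel:
    x: float            # dimension 1: lateral position
    y: float            # dimension 2: lateral position
    z: int              # dimension 3: layer index (0, 1, or 2)
    size: float         # dimension 4: structure size
    orientation: float  # dimension 5: slow-axis angle, in radians

# Because each dot carries two properties beyond its position, a single
# voxel can encode more than one bit, which is where the density gain
# over conventional optical media comes from.
v = Voxel(x=1.5, y=2.0, z=1, size=0.8, orientation=0.6)
```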

This is not the first time 5D optical data storage has popped up. It was just impractically slow before. Data is added to the discs with lasers, but if the laser moves too fast, the disc’s structural integrity is compromised. The technique devised by doctoral researcher Yuhao Lei uses a femtosecond laser with a high repetition rate. The process starts with a seeding pulse that creates a nanovoid, but the fast pulse doesn’t need to actually write any data. The repeated weak pulses leverage a phenomenon known as near-field enhancement to sculpt the nanostructures more gently.

The researchers evaluated laser pulses at a variety of power levels, finding a level that sped up writing without damaging the silica glass disc. The study reported a maximum data rate of one million voxels per second, and each voxel encodes multiple bits in 5D optical systems. That works out to a data rate of about 230 kilobytes per second. At that point, it becomes feasible to fill one of the discs, which have an estimated capacity of 500TB. It would take about two months to write this much data, after which it cannot be changed.

This work is still in the early stages, but the team managed to write and retrieve 5GB of text data using a 5D optical medium. All you need to read the stored data is a microscope and polarizer, and it should be readable for eons. We might not have anything interesting enough that needs to be saved for a few billion years, but maybe we will someday.

November 1st 2021, 6:33 pm

New Analysis of Iconic Miller-Urey Origin of Life Experiment Asks More Questions Than It Answers


How did we get here? It’s no small question. Scientists have been hacking away at the origin of life ever since we opened up “science” in the human skill tree. In 1952, two chemists conducted an experiment designed to brew up a kettle of primordial soup, and in doing so, they began to probe the circumstances under which life arose on Earth. Their work still bears their names: the Miller-Urey experiment inspired countless other studies, and it’s in every freshman biology text. But a new analysis of the OG experiment has concluded that one component of the primordial soup must have come from an unexpected source. The analysis is compelling and peer-reviewed, and it raises more questions than it answers.

The groundbreaking Miller-Urey experiment was designed to test the idea of “abiogenesis,” which is the notion that life could come from that which was not previously living. The idea goes like this: if life came from the primordial soup, and we’re living but the soup wasn’t alive, then at some point there must have been some kind of transition from nonlife to life. Life is made of cells. (Don’t get me started on viruses.) Cells are made of polymers, which are made of monomers, which are made of yet smaller, simpler, building-block molecules. There should be some knowable transition from life to unlife, somewhere in the cosmos we live in, for us to witness and understand.

But the early Earth was very different from the one we occupy, enough so that it confounded our experiments at first. One important difference is the atmosphere: before the Great Oxygenation Event, our planet had a reducing atmosphere, made of things like hydrogen, methane, and ammonia. The difference is so great between our current atmosphere and the primordial atmosphere, in fact, that entirely separate classes of chemical reactions are favored. Still, methane and ammonia contain carbon and nitrogen, which are the necessary raw materials for the backbone of all known proteins and amino acids. Add in gaseous hydrogen and you’ve got the materials to make hydrocarbon chains, sugars, and even nucleic acids. So Stanley Miller and his advisor Harold Urey sealed those gases inside a sterile glass vessel, which was connected to another smaller glass bubble containing water. Heating the water made steam, which mingled with the reducing gases to make a microcosm of what we believed the atmosphere was on the primordial Earth. The resulting clouds swirled around electrodes that sent a spark across a gap, over and over, mimicking lightning from ancient storms. A cooling clamp allowed the vapors to condense into a tiny, domesticated version of the corrosive primordial rain, which puddled in a collection chamber below.

The resulting solution contained five amino acids, the two chemists reported; on that point, the chromatography was unequivocal. Their report also acknowledged weak evidence for the presence of two other amino acids, but it was too weak to support a definitive claim. In 2007, Miller’s scientific successor, Jeffrey Bada, and several colleagues examined the original chromatography slips from the 1952 experiment. They determined that the original report had been, if anything, too conservative: not just five or seven amino acids had been created in the reaction chamber, the 2007 analysis reported, but twenty-five. But looking at the intact setup from Miller and Urey’s work, one coauthor of the 2007 analysis, Joaquin Criado-Reyes, realized that even more must have been going on inside those sealed reaction vessels.

The original Miller-Urey experiment.

Borosilicate glass, sometimes called Pyrex, is a kind of extremely tough glass often used in labware because of its resistance to breakage, corrosion, and other kinds of abuse. Usually a good scrub with some Alconox is enough to clean up lab glass so that it shines like the day it came out of the box. What’s more, glass is often what you store the acids and solvents in before you use them to clean off other, lesser tools. But in the 1952 experiment, in addition to making amino acids, electrifying those reducing vapors produced a highly corrosive, highly alkaline stew of chemicals sufficient to etch and pit the glassware itself. To get a better look, Criado-Reyes and colleagues ran the Miller-Urey experiment again, but in three parallel trials. One used the original glassware from the 1952 work, one used a Teflon vessel, and one used a Teflon vessel with broken chips of borosilicate dropped in. (Teflon is waxy, hard, and obnoxiously nonreactive, and when even glass can’t safely contain a chemical, often Teflon can.) At the end of the experiment, the kinds of organic molecules created inside all three reaction vessels matched. But the quantities didn’t. The Teflon vessels contained less of everything.

The imbalance still supports Miller and Urey’s original work. Etching the glass would have dissolved some of the silicon dioxide. Liberating silicate into solution creates a twofold catalyst: the silicate molecules themselves, as well as the corroded, pitted surface they leave behind. So the Teflon vessels had good reason to produce lower yields than the glass. But the results from the 2021 analysis do call other things into question.

One answer to the Fermi paradox is a dismissive hand-wave at the sheer improbability of life. (Never mind that to the best of our knowledge, life does in fact have a 100% probability of existing, which we know because we are here to scratch our collective heads about it.) While chemistry teaches us that higher-order reactions become less likely and therefore harder to find, by virtue of having to line up several different sets of circumstances, the raw components of the primordial soup experiments are not rare. We’ve observed lightning on Jupiter, which has an atmosphere of hydrogen, helium, methane, and ammonia. Rocky planets are common, and with them, the same silicon dioxide that’s in Pyrex. And nearly every star has a habitable zone in which it might be possible to find liquid water. According to the most recent (2018) NASA analysis of data from Kepler, up to half the stars in our galaxy could have terrestrial planets within their habitable zones. The ubiquity of these elements weakens the “rare earth” hypothesis, without necessarily being courteous enough to provide any more information. To begin the Rube Goldberg chain of reactions that eventually produces amino acids from dissociated organic and inorganic molecules, all you need is rocks, water, and lightning, and coastal storms are not hard to find. So what gives?

The answer is also one big reason we have a scientific interest in exploring places like Titan and Enceladus. Polymers, long molecules that can encode information in their sequence, are thought to be essential to life. But there may be another chapter to the story. Miller and Urey didn’t stop with just one batch of primordial soup. In another experiment, they shot vapors from a nozzle at a spark gap. That work produced almost two dozen amino acids, plus amines and other hydroxylated species. Where else in the universe do you find lightning getting shot through pressurized water vapor? Cryovolcanoes, like those found on the moons of Saturn, represent one place where conditions exist that can produce polymers. We may well need to cast a scientific eye toward the colder reaches of our solar system to understand how life formed here on Earth.

November 1st 2021, 6:33 pm

Major Solar Flare Could Cause Spooky Aurorae on Halloween


Earth would be a lifeless husk without the warming rays of the sun. But like they say: you have to take the good with the bad. The sun occasionally fires off massive flares and eruptions that can cause problems here at home. A major solar flare this past week caused radio disruptions, and the cloud of charged particles it released is expected to reach us in the coming days when it could cause even more radio interference. The upshot, though, is some spooky boosted northern lights for Halloween. 

NASA reported the flare on October 28th, saying it had classified it as an X1 event. That’s the most potent class of solar flares, although it’s not the most powerful level. An X1 flare is the bottom rung of the X-class. An X2, for example, would be twice as powerful as this one, and an X3 would be three times as intense. A flare at the X10 level is considered unusually dangerous and powerful, but even an X1 is nothing to scoff at. 
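For context on how those class labels relate to one another: the standard GOES flare scale is logarithmic between letters and linear within a letter, so each letter step is a tenfold jump in peak soft X-ray flux, and the trailing number multiplies that baseline. A quick sketch (the letter-to-flux mapping is the standard GOES convention; the helper function itself is ours):

```python
# Standard GOES soft X-ray flare classification: each letter is a 10x
# step in peak flux (W/m^2, measured in the 1-8 Angstrom band), and the
# number after the letter scales linearly within that class.
FLUX_BASE = {"A": 1e-8, "B": 1e-7, "C": 1e-6, "M": 1e-5, "X": 1e-4}

def peak_flux(flare_class: str) -> float:
    """Convert a class string like 'X1' or 'M5.5' into W/m^2."""
    letter, multiplier = flare_class[0].upper(), float(flare_class[1:])
    return FLUX_BASE[letter] * multiplier

print(peak_flux("X1"))                            # baseline of the X class
print(peak_flux("X2") / peak_flux("X1"))          # 2.0: twice as powerful
print(round(peak_flux("X1") / peak_flux("M1")))   # 10: one full class up
```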

Powerful solar flares are often associated with coronal mass ejections (CMEs). These occur when a loop of plasma known as a prominence rises off the sun. Instead of collapsing back into the nuclear inferno, a large volume of that plasma can break free and race off into space. If the flare is facing Earth, those particles can reach us and cause changes in the atmosphere that affect electronic systems.

According to NASA, the initial flare causes radio distortion on the sunward side of Earth, centered on South America. You can see the flare in the animation above, but most of the particles that reach Earth in the coming days won’t emit visible light. You’ll know when they get here, though. 

Scientists expect the CME to hit the atmosphere by Saturday or Sunday (Oct. 30-31). When that happens, the particles will ramp up aurorae around the poles, making the aurora borealis (also known as the Northern Lights) much more intense. While visible aurorae are usually restricted to the northern latitudes, this weekend could see intense lights extending all the way down into the northern United States, including Illinois, Minnesota, Oregon, and much of New England. Canada will also get a good show, as will Iceland, Norway, and Scotland. The atmospheric glow might even stick around for Halloween night, which is maybe worth the small inconvenience of radio and satellite disruption.

November 1st 2021, 1:17 pm

5 Best Android Phones


Android phones aren’t getting any cheaper. In fact, they’re getting even more expensive with every release cycle. The mobile marketplace isn’t alone in this trend, but the right phone could last long enough to justify the ever-increasing cost. What’s the right one? Well, we can point you in the right direction. Here are the five best Android phones available right now. 

OnePlus Nord N200

Starting in the budget category, the OnePlus Nord N200 offers a 1080p screen with a 90Hz refresh rate, a capable Snapdragon 480 chip, and impressive build quality. OnePlus is in the midst of developing a new Android platform shared with its parent company Oppo, but the Oxygen OS build that ships on the N200 is a known quantity, and it’s great. The Nord doesn’t have a ton of bloatware, and most of the customizations are useful. 

It’s not the fastest phone, but it’s plenty fast for its $240 asking price. It’ll outperform most devices in this price range, and it even has 5G, although only T-Mobile has certified the Nord for its 5G network at this time. The unlocked Nord will still work on 4G with AT&T and Verizon. T-Mobile will also give you the phone free with a new line.

Google Pixel 5a

If you want a premium smartphone experience and don’t fancy spending a lot of money, the Pixel 5a should be your first choice. You won’t get the kitchen sink, but the Pixel is all-around competent. There are even some features, such as the camera, that compete with much more expensive phones. The 12MP shooter might not sound impressive on paper, but Google’s photo processing is second to none. The images you get from the 5a are stupendous in any lighting conditions. Plus, Google’s version of Android is clean, fast, and gets quick updates. While other devices are languishing on Android 11, the 5a will head into late 2021 with Android 12. 

The Pixel 5a retails for $450, which is a good value for what it offers. The only drawbacks are you can only get it in one color (a sort of greenish-black), and there’s no in-display fingerprint sensor or high-refresh display. The minty power button is the only splash of color or distinctiveness on what is otherwise a very boring-looking device. 

Google Pixel 6

After years of beating around the bush, Google is finally taking its flagship Pixel phones seriously. The Pixel 5a is great, but for only $150 more, you can get Google’s latest high-end phone. No more mediocre screens and small batteries—the Pixel 6 has the best that Google has to offer, including Android 12, the Google Tensor custom processor, and an all-new camera array (50MP primary and 12MP ultrawide)  that takes incredible photos. It’s not as versatile as the S21 Ultra’s quad-camera setup, but you will get more photos you actually like out of the Pixel 6.

The Pixel 6 runs on the octa-core Google Tensor chip, paired with 8GB of RAM, 128-256GB of storage, and a 4,612mAh battery. The 6.4-inch display is only 1080p, but it’s crisp, bright, and very smooth thanks to the 90Hz refresh rate. This is also Google’s first phone with an in-display fingerprint sensor, and while it’s not as fast as Samsung’s ultrasonic sensor, it’s a big step up from the cheaper rear-facing sensors used on the older Pixels. Google is only asking $599 for the Pixel 6, which makes phones like the base Galaxy S21 and Motorola Edge look like yesterday’s news. The only real drawback is availability. The Pixel 6 is sold out everywhere, and it may be a few months before supply evens out.

Galaxy S21 Ultra

If even the Galaxy S21 isn’t enough for your discerning tastes, the S21 Ultra is like that phone on steroids. The S21 Ultra dunks on almost everything we’ve talked about so far with a fantastic quad-camera setup, featuring two different optical zoom levels and a 108MP primary camera. The only competition is the Pixel 6, which isn’t as good at shooting at long range. The S21 Ultra also sports the best screen you can get on a phone — 6.8 inches, 1440p, and 120Hz refresh rate. It also supports Samsung’s S Pen stylus, one of just a handful of devices that do so. It’s an enormous phone, though, tipping the scales at 229g and 165mm tall. By comparison, the S21 is only 151mm tall and 169g. 

The S21 Ultra has the same software as the base model, which is better than Samsung’s version of Android used to be. It’s still not a match for the Pixel, though. The hardware pushes the price tag into the stratosphere. The retail cost is $1,200, but it often goes on sale for a few hundred bucks off.

Galaxy Z Fold3

The Galaxy Z Fold3 is the Cadillac of smartphones: it’s large, expensive, and uncompromising. The external 6.2-inch display is a little narrow and hard to use, but open the Fold3 up and you have a 7.6-inch folding OLED. It’s basically a small tablet that folds up and fits in your pocket. The software has been optimized for the larger screen with enhanced multi-window support and a special UI mode for select apps. It’s not so large that unoptimized Android apps look comical, though. 

The Fold3 is without a doubt the most capable Android phone you can buy, but you’ll have to pay handsomely to have this brick of a phone in your pocket. The $1,800 price tag makes this a non-starter for almost everyone buying a new phone, but if you wait for it to go on sale or have a phone to trade in, the Fold3 can get close in price to the S21 Ultra. But the Pixel 6 is still the elephant in the room at just $600.

November 1st 2021, 1:17 pm

Future 256-Core AMD Epyc CPU Might Sport Remarkably Low 600W TDP


There’s a rumor making the rounds that AMD’s future Zen 5 CPU family, codenamed Turin, might have a TDP as high as 600W. These future CPUs will supposedly be available in at least two configurations: 192 cores / 384 threads and 256 cores / 512 threads.

Take this rumor with a mountain of salt. AMD has not said much publicly about Zen 5. With Genoa expected to jump to 96 cores with Zen 4, a leap to 256 cores with Zen 5 would be a large jump indeed — especially since there might not be a die shrink between Zen 4 (5nm) and Zen 5. Alternately, it’s possible that AMD might release a Zen 4 and Zen 4+ before a theoretical Turin CPU launched later.

No matter what, this is a chip we wouldn’t expect to see for another 24 – 36 months. AMD is almost certainly still working on the Zen 5 architecture and considering the configurations it might build relative to where Intel and its potential ARM competition will be in the future.

A number of posts online are referring to a 600W TDP as a “monster,” but that depends on how you evaluate the situation. It’s true that we’ve never seen a desktop or x86 socket that could dissipate this kind of heat, but while 600W is quite high in absolute terms, it’s downright svelte when you consider how many cores this rumor suggests the future chip will offer.

Right now, an AMD Epyc 7763 has a TDP of 280W and 64 CPU cores. This works out to 4.375W per core if you assume the chip’s L3 caches and Infinity Fabric draw no power at all. A future 600W CPU with 256 cores is a chip that allocates no more than 2.34W per core; 1.86x less power than current Epyc CPUs. This implies quite a bit of near-term improvement in AMD’s performance per watt at a time when lithography-based improvements are shrinking with every node. Feel free to enjoy some salt mountain at this point.

A CPU with this many cores isn’t going to be aimed at consumer markets. There are so-called “embarrassingly parallel” workloads in computing that can scale to very high core counts, but many of them have been optimized for GPU execution over the last decade. Perhaps more pertinently, a system with 256 cores and the same eight cores per memory channel ratio as current Epyc CPUs would need a 32-channel memory interface to keep the cores fed. While there are ways to reduce the need for memory bandwidth, large L3 caches and on-package HBM aren’t cheap.
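The per-core power and memory-channel arithmetic above is easy to check. A quick sketch, using the figures quoted in this article (the naive split assumes caches, I/O, and Infinity Fabric draw no power, which they obviously do):

```python
# Naive per-core power split: divide TDP evenly across cores, ignoring
# L3 caches, I/O dies, and Infinity Fabric (which draw real power).
def watts_per_core(tdp_watts: float, cores: int) -> float:
    return tdp_watts / cores

current = watts_per_core(280, 64)    # Epyc 7763
rumored = watts_per_core(600, 256)   # hypothetical 256-core Turin

print(f"Epyc 7763: {current:.3f} W/core")        # 4.375 W/core
print(f"256-core rumor: {rumored:.3f} W/core")   # 2.344 W/core
print(f"Efficiency gain implied: {current / rumored:.2f}x")

# Memory bandwidth: at the current ratio of eight cores per DDR channel,
# a 256-core part needs 256 / 8 = 32 channels to stay balanced.
print(f"Channels at 8 cores/channel: {256 // 8}")
```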

A rumor like this is difficult to completely dismiss because the claims it makes are in line with some long-term projections for where CPU development is headed. AMD has aggressively increased x86 core counts with Ryzen. It has publicly stated that it adopted chiplets partly for their superior scaling characteristics relative to conventional monolithic designs.

A 256-core CPU isn’t all that large when you consider that the Ampere Altra Max already offers a 128-core single-socket solution. While it’s an aggressive roadmap for AMD given that the company is “only” expected to ship 96-core chips in 2022, one could argue it shows AMD taking the threat of a resurgent Intel seriously, and that the company intends to retain the leadership position it opened against its rival while Intel was stuck on 14nm.

I don’t have any inside information on when/if AMD intends to ship a 256-core CPU, but it’s definitely something the company has thought about. Silicon development timelines are long and engineers are accustomed to planning for what’s likely to be available 24-36 months in the future. The many-core research projects from a decade ago showed that a balance must be struck between the number of cores and the amount of work those cores are capable of doing. Whether AMD steps up to 256 cores or not will depend in part on how moving from 128 to 256 cores would impact the CPU’s ability to perform useful work in more lightly threaded applications.

As for the 600W figure, that’s not particularly large for a theoretical 256-core CPU. Such a system could wind up saving power by eliminating redundant hardware in multiple chassis that would otherwise be required to provide the same number of cores.

November 1st 2021, 1:17 pm

Apple’s New MacBooks Include a Notch-Hiding Feature by Shrinking the Screen


(Photo: Apple)
Some despise it, some don’t mind it. Apple’s latest 14” and 16” MacBook Pros come with a notch at the top of the screen where the device’s camera lies, similar to that of the iPhone. But users report that since getting their hands on the new MacBooks, certain applications haven’t interacted well with the notched display; rather than considering the notch in their formatting, the applications act as though it isn’t there, thus losing a small chunk of their screen real estate. 

The issue has prompted Apple to push out a temporary workaround, according to a new support document. To avoid the notched area entirely, the company has provided 2021 MacBook Pro users with a way to display an app entirely below the camera area, sacrificing a slim band of screen space. Users can now go into an app’s settings, open the Info window, and select an option called “Scale to fit below built-in camera.” Apple notes that once this option is selected, all open apps or apps that share the same space will appear below the camera until users quit the app using the scaled setting.

(Photo: Apple)

The workaround is a band-aid fix intended only to be used until more apps interact with the camera notch properly. Prior to the fix, users complained that toolbars were losing their middle area to the notch, or that cursors would awkwardly jolt around the notch as if they didn’t know what to do with it. Also, some toolbar options have been accessible under the notch, and some haven’t. Overall, the UI involved in one of Apple’s shiny new MacBook Pro features has been disappointing, to say the least.

Apple’s temporary solution isn’t the most attractive one either, given that many new MacBook Pro owners were drawn to the device for its larger screen. With thicker bezels like those of the old MacBook Pros, it’s harder to show off that you have the latest model at your local coffee shop or in the office. In that way, the notch is a status symbol, something you want to notice and have others notice—just not in the way users have experienced so far.

October 29th 2021, 11:45 am

Project Pele: Why the DoD is Betting on Tiny Nuclear Reactors to Solve Its Power Woes


In 2019, the government signed an order mandating that we develop an itty bitty nuclear reactor by 2027. In compliance with that order, the US Air Force is launching a “microreactor” pilot project at Eielson AFB in Alaska.

Per usual, the Air Force is playing its cards pretty close to the chest. As of October 27, the Office of Energy Assurance (OEA) hasn’t even announced that they’ve chosen a specific reactor technology. But all evidence suggests that this new installation is part of an energy-resilience effort known as Project Pele. The goal of Project Pele, according to the Dept. of Defense’s Research and Engineering office, is to “design, build, and demonstrate a prototype mobile nuclear reactor within five years.” Three separate development contracts have been awarded, with the final “mature” design submissions TBA.

Project Pele has two main themes: the reactor has to be 1) small, and 2) safe. What we’ve learned from Chernobyl and Fukushima is that failure of the coolant system can have terrible consequences, and in both cases, power failure to the cooling system is what allowed the fuel to become so hot that it entered meltdown. Failure is simply unacceptable. With nuclear power, we also have to consider decay heat and spent fuel disposal. Inability to dispose of hazardous byproducts counts as being unsafe. Even worse, the same stuff we use to make the power can be used to make weapons. The new Generation IV reactor designs aim to address exactly these problems.

Without getting all breathless, I want to talk about one of the three designs likely being put forth in particular. One of the commercial contractors chosen to submit a design is a domestic outfit called X-Energy, whose higher-ups come from NASA and the US Department of Energy. Its CEO, Jeffrey Sells, previously served as Deputy Secretary of Energy, and founder Kam Ghaffarian operated a NASA service contractor that supported the former Mission Operations Data Systems at Goddard. The X-energy model is a Gen IV high-temperature gas-cooled pebble bed reactor. It uses TRISO fuel pellets or “pebbles” (TRISO stands for TRi-structural ISOtropic particle fuel) loaded into a column that’s then flooded with a heavy, nonreactive gas. And the whole thing is absolutely tiny: X-energy’s website describes their reactors not as building sites, but as modular products, shippable using existing road and rail.

The pebble-bed model used by X-Energy is clearly meant to specifically address many known failure points of nuclear power production. Whether it actually delivers on that promise is yet to be seen, because this is all still in the planning stages, but the design principles are there. First and worst is meltdown, which X-Energy is mitigating via the composition of the fuel itself. The TRISO pebbles are made of granules of uranium oxycarbide the size of poppyseeds, layered with pyrolytic graphite and embedded within a silicon carbide firebreak. The whole thing is the size of a cue ball.

Silicon carbide is what NASA uses in the heat shielding for numerous spacecraft. It’s tough stuff, very strong under pressure, and very difficult to melt. Carbides aren’t melted and cast like regular metals, because their melting points are higher than those of most metals. Instead, uranium oxycarbide is created using spark plasma sintering. TRISO pebbles are also passively governed by a negative-feedback mechanism that starves the fuel of neutrons as the temperature rises, independent of any active or mechanical control. Higher temperatures mean falling reaction power, enforced by the nature of the material itself. It’s hard to have a meltdown if your fuel just… won’t melt.
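That passive behavior can be illustrated with a toy model: let reaction power fall linearly as fuel temperature rises above a reference point, and the system settles at a stable equilibrium instead of running away. The coefficients below are invented for illustration and bear no relation to real TRISO parameters:

```python
# Toy negative-temperature-feedback model. Power drops linearly as the
# fuel heats up; heat removal rises with temperature. The interplay
# drives the system to a stable equilibrium -- no control rods needed.
def simulate(steps: int = 5000, dt: float = 0.1):
    temp = 300.0        # fuel temperature (arbitrary units)
    alpha = -0.005      # feedback coefficient: power lost per degree
    cooling = 0.01      # passive heat-removal rate
    power = 1.0
    for _ in range(steps):
        power = max(0.0, 1.0 + alpha * (temp - 300.0))   # feedback
        temp += dt * (power - cooling * (temp - 300.0))  # heat balance
    return temp, power

temp_final, power_final = simulate()
# The model settles near temp ~366.6 and power ~0.667: hotter fuel
# means lower power, and the temperature rise is self-limiting.
print(round(temp_final, 1), round(power_final, 3))
```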

Explosions also present their own set of dangers, including particulate from burning fissile material or graphite shielding. In this design, the reaction is held at temperatures far above the annealing point of graphite. This prevents stray potential energy from neutron bombardment from getting “stuck” in the graphite’s crystal lattice and eventually escaping in an uncontrolled burst, which is what happened in the Windscale fire. Pyrolytic carbon can burn in air if it’s also in the presence of enough water to catalyze the reaction, but there is no water-cooling loop, which prevents a steam explosion.

The use of uranium oxycarbide instead of uranium oxide or carbide is intended to reduce the oxygen stoichiometry; carbides are strong under pressure but not under expansion, so the oxycarbide should produce less gas under decomposition. That means that even if one of the carbide pebbles should rupture, smothered in the heavier-than-air gas, it won’t catch fire. The coolant never leaves the gas phase. The design relies on simply placing a critical mass of fissile material inside a gas-cooled reaction vessel, where it will go critical on its own. It’s essentially a bunch of angry jawbreakers sitting in the bottom of a tank, irritating one another into producing energy. Instead of shutting down to replace fuel rods, pebble bed reactors collect a pebble from the bottom of the container at regular intervals by way of gravity, test it, and recycle it to the top of the column.

Look at it. It’s the worst Gobstopper.

Once fully operational, the reactor will produce between one and five megawatts. That’s quite small for any power plant, and even more so for a nuclear plant — nuclear plants are often rated in the hundreds of megawatts or even the gigawatt range. At five megawatts it still barely clears a third of the Eielson base’s gross energy budget. But the micro-reactor isn’t being installed so that it can handle the base’s power consumption. This is a proof of concept, for both a reactor design that fails toward safety, and a portable source of radiant energy that doesn’t require a constant external material supply.

One serious weak spot this reactor could address is the way the armed forces get power in the field. For example, in Iraq and Afghanistan, the military used fuel convoys to truck in diesel to their installations, which ran on diesel generators. But generators are loud, dirty, expensive, and prone to breakdowns. They are also a hazard to human health: fuel-burning generators produce dangerous fumes and super-fine particulate. Furthermore, the convoys themselves were low-hanging fruit for insurgent attacks. All of this requires maintenance and lots of security. Much of the reason Eielson was chosen over any other site comes down to its reliance on fossil fuels that have to be transported in, like coal and diesel. The armed forces have a direct strategic interest in weaning their operations off petroleum fuels, to the extent they can.

What benefits the military, though, often ends up also improving civilian lives. Eielson AFB is only about a hundred miles south of the Arctic Circle. During the heating season, the base can burn 800 tons of coal every day. Like much of Alaska, it is beholden to energy supply lines prone to failure exactly when they’re most needed. Most of the state uses coal or diesel to provide electricity and heating. Much of Alaska is also only accessible by boat or plane. Juneau doesn’t even have a road connecting it to the outside world, because the terrain is so uncooperative. One failure point can easily line up with another. Eielson’s northerly location, along with its inexhaustible need for fuel, make it an excellent sandbox (snowbank?) for field testing the microreactor. Greater Alaska is also keenly interested: According to the Anchorage Daily News, “a cost-effective 1-5 MW power generator that doesn’t require refueling could represent a sea change for rural power in our state, as that range covers the needs of dozens of villages off the road system that currently have some of the most costly power in the state — and which are vulnerable to generator breakdowns in the dead of winter, when the consequences can be life-threatening.”

The issue of waste disposal remains unresolved. Shiny and chrome though these pebbles may be, they still embody about the same radioactivity per kilowatt hour as spent conventional fuel — it’s just spread across a larger volume. While this makes any generated waste hypothetically less awful to handle, there’s more of it, and that complicates the already manifold problems with waste handling and storage.

Final designs are to be chosen in fiscal 2022. From there, the DOD wants a reactor up and running by 2027.



October 29th 2021, 11:16 am

Facebook Teases New High-End ‘Project Cambria’ VR Headset


Facebook’s current Oculus headsets clock in at much lower prices than the headsets of yesteryear, and yet they’re still impressive pieces of hardware. However, the social media giant is working on a headset that isn’t going to compete on price. The company teased the upcoming Project Cambria VR headset at the Connect conference, saying it will be a high-end experience rather than a replacement for the Quest. It won’t launch until 2022, though. 

It’s hard to judge from the teaser, but the new headset does look more svelte than the current Quest 2. A big part of that is apparently thanks to the “pancake” optics. These new lenses work by bouncing light back and forth several times to allow for a more compact form factor. According to Facebook founder and CEO Mark Zuckerberg, the result is a slimmer, more comfortable headset. The company also announced it was pulling a Google and reorganizing under a new parent company called Meta. So technically, Oculus and Facebook are both Meta companies now. 

Project Cambria should also be a much more immersive experience than current VR headsets. According to Zuckerberg, the premium headset will include eye and face tracking, so your virtual avatar will be able to maintain eye contact and change expressions to match your own — it sounds a bit like Memoji on Apple devices. Zuck also hinted at body-tracking, which could let you interact with virtual and augmented reality spaces in a more intuitive way. 

It’s also a safe bet that Project Cambria will include higher resolution displays. It will include high-resolution cameras that can pass full-color video to the headset’s display. This opens the door to augmented reality applications and virtual workspaces that can be overlaid on your boring old desk. 

We haven’t seen the device in the flesh yet, but the silhouette above sure does look like a recently leaked “Oculus Pro.” That headset is alleged to have body-tracking capabilities, augmented reality features, and a controller dock. However, the leaked videos hardly constitute confirmation. Facebook says the Project Cambria headset won’t launch until next year, and a lot can change in the meantime. 

Zuckerberg didn’t talk about pricing during his keynote except to say it would be more than the Quest 2, which starts at $299. That’s a good value for what is arguably the most capable and well-supported VR headset on the market. But how much can Facebook push the price before the burgeoning interest in VR peters out? Probably not as much as Zuck would like.


October 29th 2021, 9:01 am

NASA Wants Your Help Improving Perseverance Rover’s AI


NASA’s Perseverance rover is the most advanced machine ever sent to the red planet, with a boatload of cameras and a refined design that should stand the test of time. Still, it’s just a robot, and sometimes human intuition can help a robot smarten up. If you’re interested in helping out, NASA is calling on any interested humans to contribute to the machine learning algorithms that help Perseverance get around. All you need to do is look at some images and label geological features. That’s something most of us can do intuitively, but it’s hard for a machine. 

The project is known as AI4Mars, and it’s a continuation of a project started last year using images from Curiosity. That particular rover arrived on Mars in 2012 and has been making history ever since. NASA used Curiosity as the starting point when designing Perseverance. The new rover has 23 cameras, which capture a ton of visual data from Mars, but the robot has to rely on human operators to interpret most of those images. The rover has enhanced AI to help it avoid obstacles, and it will get even better if you chip in. 

The AI4Mars site lets you choose between Opportunity, Curiosity, and the new Perseverance images. After selecting the kind of images you want to scope out, the site will provide you with several different marker types and explanations of what each one is. For example, the NavCam images ask you to ID sand, consolidated soil (where the wheels will get good traction), bedrock, and big rocks. There are examples of all these formations, so it’s a snap to get started. 

With all this labeled data, NASA will be able to better train neural networks to recognize terrain on Mars. Eventually, a rover might be able to trundle around and collect samples without waiting for mission control to precisely plan each and every movement. It’ll also help to identify the most important geological features, saving humans from blindly combing through gigabytes of image data. 
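To give a rough sense of one way this kind of labeled data gets used — a minimal sketch only, since NASA hasn’t published the AI4Mars training pipeline — crowd-sourced terrain labels are typically tallied into per-class statistics before training, so that rare classes like big rocks can be up-weighted and the network doesn’t simply learn to predict sand everywhere. The class names and label format below are assumptions for illustration:

```python
from collections import Counter

# Hypothetical AI4Mars-style terrain classes from the NavCam labeling task
CLASSES = ["sand", "consolidated_soil", "bedrock", "big_rocks"]

def class_weights(labeled_images):
    """Count how often each terrain class was labeled, then derive
    inverse-frequency weights so rare classes count more in training."""
    counts = Counter()
    for labels in labeled_images:  # each image: a list of class names
        counts.update(labels)
    total = sum(counts.values())
    # weight = total / (num_classes * count); unseen classes get weight 0
    return {c: total / (len(CLASSES) * counts[c]) if counts[c] else 0.0
            for c in CLASSES}

# Toy example: mostly sand and soil, bedrock and big rocks much rarer
images = [["sand", "sand", "consolidated_soil"],
          ["sand", "bedrock"],
          ["consolidated_soil", "sand", "big_rocks"]]
weights = class_weights(images)
```

These weights would then feed a weighted loss function during training, which is one common way to handle the class imbalance inherent in planetary terrain imagery.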

The outcome of Curiosity’s AI4Mars project is an algorithm called SPOC (Soil Property and Object Classification). It’s still in active development, but NASA reports that it can already identify geological features correctly about 98 percent of the time. The labeled images from Perseverance will further improve SPOC, and the new dataset covers more subtle details, including float rocks (“islands” of rocks), nodule-like pebbles, and the apparent texture of bedrock. In some images, almost all of the objects will already be labeled, but others could be comparatively sparse. 

The Curiosity AI project resulted in about half a million labeled images. The team would be happy with 20,000 for Perseverance, but they’ll probably get many more.


October 29th 2021, 5:13 am