Dell upgraded this laptop with one of Intel’s new 10th generation Core i7-10510U processors, which has four CPU cores that can hit clock speeds as high as 4.9GHz. The system also comes with a fast NVMe SSD and a 1080p display, which makes it well suited for everyday tasks. It can even run some games at low settings thanks to a low-end Nvidia GPU. You can get it now from Dell marked down from $1,570.00 to just $849.00. If you have an Amex card, you can save even more by using promo code STAND4SMALL at checkout to drop the price to $764.09.
This compact desktop is all business with a relatively small physical footprint that’s in no way indicative of its performance. At the heart of this PC is an Intel Core i5-9400 processor with six CPU cores that can turbo boost up to 4.1GHz, which gives it strong performance for running applications. Currently, you can get this system from Dell marked down from $998.57 to just $549.00. If you have an Amex card, you can save even more by using promo code STAND4SMALL at checkout to drop the price to $494.09.
The Roomba 891 robot vacuum was designed to offer powerful suction, which makes it ideal for difficult cleaning tasks such as removing pet hair from carpet. It also connects using Wi-Fi and can be controlled via your smartphone. For a limited time you can get one of these useful devices marked down from $449.99 to $299.99 from Amazon.
SanDisk built this external SSD with a large 500GB capacity and a rugged, water-resistant exterior. The drive can transfer data at speeds of up to 550MB/s over USB 3.1, which far outstrips your typical USB flash drive or external HDD. You can currently buy this SSD marked down from its original retail price of $169.99 to $81.99.
Working on a 4K monitor has some major advantages, including being able to fit more on-screen at any given time. This display from Dell utilizes a 27-inch 4K panel that also supports 1.07 billion colors, making it well-suited for image editing. Right now you can get one from Dell marked down from $719.99 to $579.99. If you have an Amex card, you can save even more by using promo code STAND4SMALL at checkout to drop the price to $521.99.
The LG G8 ThinQ features a 6.1-inch OLED display with a resolution of 3120×1440. This phone is also one of the fastest on the market with a Qualcomm Snapdragon 855 octa-core processor and 6GB of RAM. Amazon is offering this phone marked down from $849.99 to $399.99.
Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com.
Customer satisfaction is always a prime indicator of whether a product or service is connecting with its intended audience. And by those metrics, Amazon’s Fulfilled By Amazon (FBA) third-party selling system is posting some overwhelmingly satisfying numbers, according to survey results.
In fact, more than 92 percent of sellers surveyed in 2019 said they were happy enough with their results that they would continue working with Amazon again through 2020. That’s a ridiculously high batting average for any company, let alone one with the name recognition and market standing of Amazon.
The coursework begins with a prime introduction to Amazon commerce, The Complete Amazon FBA Masterclass. Even for those who are e-retailing for the first time, the training here breaks down every step of starting, nurturing and expanding an FBA business. From navigating the platform to securing the right products to building a sustaining customer base, this is the step-by-step guide for getting an Amazon FBA business up on its feet.
If you’re looking to start selling, but aren’t quite sure of exactly what you want to sell, you’ll examine some of the top-selling items you might want to consider in the Amazon FBA and eBay: 33 Hot Product Sourcing Strategies course. The training also covers some behind-the-scenes methods for finding your store’s inventory without getting gouged by wholesalers.
Next, Launch Your First Private Label Product Using Amazon FBA explains one of the most profitable tactics used by Amazon sellers, creating your own boutique brand line that can keep loyal buyers coming back again and again.
Finally, the Source and Sell on Amazon FBA by Leveraging on Established Listings course comes at Amazon selling from the other direction, identifying how you can spot products and leverage the results of top FBA sellers, all without creating your own Amazon listings.
This 360-degree guidebook to selling on Amazon is normally a nearly $800 value, but right now, it’s only $29.99 with this offer.
Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com.
Microsoft used to release new versions of Windows like clockwork every couple of years, but we’ve been on Windows 10 for quite a while now. The company instead releases free feature updates for the operating system once or twice per year. Rolling those updates out can be perilous, though. Currently, Microsoft’s latest May 2020 update is on hold for most devices as the company works to resolve a raft of issues.
The May 2020 update began hitting computers last week, but almost all the updated devices are from Microsoft’s own Surface lineup. Even so, many Surface models are still waiting, just like those of us with other Windows-powered hardware. Since the initial rollout, Microsoft has discovered 11 major issues (as of this posting), and 10 of them are still under investigation. Some of the more vexing ones include a stop error when connecting or disconnecting a Thunderbolt dock, no mouse input in select games, and the inability to connect more than one Bluetooth device.
If your computer doesn’t have the update already, it’s likely that Microsoft has blocked it while it does more testing. The company even added a prominent warning in Windows Update over the weekend. If you’re on the previous version looking to get the May 2020 Update (version 2004), Windows Update will remind you that your device “isn’t quite ready for it.”
Devices blocked from getting the update will show this warning in Windows Update.
Microsoft will increase installations over time as it mitigates the issues present in the new update. Most users should hang back and wait for the official rollout. However, if you want to live on the wild side and install a Windows update before Microsoft thinks you’re ready, you can do so with the Windows Update Assistant. This tool will force the latest update regardless of your system status. You should only do this if you’re willing to deal with the potential issues listed on Microsoft’s status page.
Once you get the update, Windows 10 will have a revamped Cortana interface, faster Windows search, and even a new Windows Subsystem for Linux (WSL2) with a real Linux kernel. If you’ve got an Android phone, the enhanced Your Phone app in the new update will support call/text integration, file management, and notifications on the PC.
Back in April, news broke that the major hard drive vendors were all shipping hard drives based on shingled magnetic recording (SMR) technology into the consumer channel rather than conventional hard drives. SMR drives offer significantly less performance than conventional magnetic recording (CMR) drives in many benchmarks, and none of the companies were being completely honest and transparent about which product lines used SMR and which did not.
While all three companies were selling SMR drives to consumers without fully disclosing it, Western Digital was the only company selling them to NAS customers. Seagate and Toshiba both restricted their use of SMR to certain consumer drives.
WD Red is Western Digital’s NAS hard drive brand. These are literally the hard drives that Western Digital tells NAS users to buy, which means you’d expect them to be good at doing the things a NAS is expected to do.
ServeTheHome ran a full suite of benchmarks on the 4TB WD Red NAS WD40EFAX (SMR) versus the 4TB WD Red NAS WD40EFRX (CMR). If you run standard storage benchmarks on the drive, it looks pretty good — slower than the EFRX, but not too bad.
If you actually push the drive with something like a 125GB file copy or a RAIDZ resilver test, prepare to take a vacation while you wait for the rebuild to complete:
Graph and data by ServeTheHome. There’s a non-zero chance of an additional drive failure during the ~10 days it’ll take to resilver the RAIDZ array.
1,009 minutes is 16.8 hours. 13,784 minutes is 9.57 days.
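The conversion is easy to verify, and it also puts a number on the gap between the two drives:

```python
# ServeTheHome's RAIDZ resilver times, in minutes
cmr_minutes = 1_009   # WD40EFRX (CMR)
smr_minutes = 13_784  # WD40EFAX (SMR)

print(f"CMR resilver: {cmr_minutes / 60:.1f} hours")       # ~16.8 hours
print(f"SMR resilver: {smr_minutes / 60 / 24:.2f} days")   # ~9.57 days
print(f"Slowdown:     {smr_minutes / cmr_minutes:.1f}x")   # ~13.7x longer
```

That is a roughly 13.7x longer rebuild window during which the array runs degraded.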
The lawsuit, filed by Hattis Law, alleges that Western Digital shifted to SMR drives to save money with no regard for the performance impact this would have on its customers or the fact that it would render the drives “completely worthless for their intended purpose” (emphasis original).
Hattis Law alleges that SMR drives categorically fail in RAID arrays by reporting excessive timeouts to the NAS device when asked to perform sustained random writes. SMR drives reportedly cannot perform RAID scrubbing. When the WD Red problem first broke, multiple NAS RAID users said they were unable to integrate an SMR drive into a RAID array built with CMR disks due to performance problems and excessive timeouts. The Hattis Law lawsuit aligns with what we saw RAID users reporting at the time.
ServeTheHome’s tests show that the EFAX drive might work in a desktop context, but it has no place in a RAID array. The fact that Western Digital is explicitly advertising the SMR-equipped EFAX family as suitable for this purpose is tantamount to false advertising. It might not matter if you use an EFAX drive as storage for a video camera feed, but these drives clearly have major problems in RAID arrays.
Western Digital has not commented on the situation, but it continues to sell SMR WD Red drives into this space. We recommend staying away from any products in the WD Red line-up with the EFAX model number — use EFRX drives if you’re building a RAID array on these products.
It’s possible that these drives perform adequately in some types of RAID array, but the STH tests demonstrate that there are real-world cases where they very much do not. That’s not being communicated to Western Digital customers. If you’ve bought an SMR WD Red drive and feel you’ve been defrauded, you can register as part of the class action suit here.
Feature image shows WD Red HDDs, not the specific models discussed here.
Astronomers discovered our first confirmed interstellar visitor in 2017, naming it ‘Oumuamua, the word for “scout” in the Hawaiian language. Determining what ‘Oumuamua actually was proved a more daunting task. Eventually, astronomers decided ‘Oumuamua was probably a very old comet, but a new analysis suggests it may be a different kind of object altogether — an interstellar hydrogen iceberg.
‘Oumuamua had scientists scratching their heads because everyone had always assumed our first alien visitors would be comets ejected from another solar system’s Oort Cloud. However, ‘Oumuamua didn’t form a coma or vaporize material as it neared the sun. So, an asteroid? Not so fast — more analysis showed ‘Oumuamua’s course was being nudged by outgassing from its surface. That supported the idea that ‘Oumuamua was a very old comet that had lost most of its volatile gases.
The analysis from Yale astrophysicists Darryl Seligman and Gregory Laughlin admits that a hydrogen iceberg is a rather exotic object to be flying through the solar system, but it would explain all of ‘Oumuamua’s bizarre properties. Hydrogen usually exists as a gas, but it’s possible to freeze it at very low temperatures (around -450 degrees Fahrenheit). Clumps of frozen hydrogen are believed to exist in the centers of dense molecular clouds where temperatures are near absolute zero. Could one of those hydrogen icebergs have floated our way?
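To put that temperature in context, converting -450 degrees Fahrenheit to Kelvin shows it sits only a few degrees above absolute zero:

```python
def fahrenheit_to_kelvin(f: float) -> float:
    """Convert a Fahrenheit temperature to Kelvin."""
    return (f + 459.67) * 5 / 9

# Hydrogen's freezing point as quoted in the text
print(f"{fahrenheit_to_kelvin(-450):.1f} K")  # ~5.4 K; absolute zero is 0 K (-459.67 F)
```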
Frigid molecular clouds only last a few hundred thousand years before dissipating. However, over that time a cloud could generate a block of hydrogen ice a few hundred meters across. It just so happens that’s about the same size as ‘Oumuamua, which is less than 800 meters long. This could also explain why ‘Oumuamua is cigar-shaped. After a cloud disperses, cosmic radiation would erode the iceberg as it floats through space. Uneven radiation from nearby objects could cause it to become cigar-shaped like ‘Oumuamua.
‘Oumuamua’s path through the solar system in 2017.
Perhaps the strongest evidence for the hydrogen iceberg theory is that it would explain how ‘Oumuamua changed course after entering the solar system. Comets get a small boost from outgassing but remember: ‘Oumuamua had no visible coma. Its course change could be explained if it was releasing pure hydrogen. That would give it a little nudge, and hydrogen gas would not be visible from here on Earth.
This all hangs together well, but there’s no way to confirm it right now. ‘Oumuamua is moving too fast for us to intercept as it leaves the solar system. Many astronomers believe other interstellar objects pass through the solar system on a regular basis. It’s just a matter of spotting them. If we see another object like ‘Oumuamua, we may have a chance to test the iceberg hypothesis.
There’s a rumor that’s popped up in the past several days concerning AMD’s long-term plans for 7nm and 5nm. According to this rumor, which began with a DigiTimes post now sealed behind a paywall, AMD is now considered a Tier 1 TSMC customer.
Supposedly, this newfound friendship between the two companies will result in AMD launching Zen 3 on 5nm to steal a march on Intel in a further extension of AMD’s overall market leadership. There are several reasons why this is unlikely.
First, there’s a significant lag time between when CPU designs are sent to the foundry for manufacturing (a process called taping out) and when they ship to customers. AMD sends the design to TSMC, then tests the hardware TSMC sends back and tweaks the design as necessary. All of this takes several months, best-case. I don’t know where AMD is in the Zen 3 design process, but 5nm is going to have entirely different design rules than 7nm. There’s no way to quickly port from one to the other. Leaping ahead in this fashion isn’t done because the long lead times make it impossible.
Second, it’s not clear how much advantage 5nm offers to AMD in the first place. TSMC is predicting a 45 percent density advantage, which is great, but only up to 20 percent better power efficiency or 15 percent additional performance. Keep in mind, these are best-case scores, and to some extent, they are either/or.
I don’t want to imply in any way that AMD won’t have a 5nm chip — they’ve already got one on the roadmap — but it’ll have to balance the design carefully to improve performance. At the Zen 2 briefing, AMD’s engineers told us candidly they were surprised they were able to offer any frequency improvements at all at 7nm. This doesn’t bode well for clock scaling at 5nm. The Zen 4 team will be working on that problem already, given AMD’s described design cadence.
Third, AMD also doesn’t typically lead the way on foundry node transitions. Apple and Qualcomm occupy that role these days, and we’d expect the next-generation iPhone and Snapdragon parts to account for much of the 5nm capacity when the node launches.
If you want another example of how hard it is to backport features to a different process node, consider Intel. Skylake launched in 2015. If you believe the rumors, Rocket Lake is a 14nm chip with backported 10nm features launching later this year. It’ll be the first new CPU architecture from Intel in five years.
It didn’t take Intel five years to backport 10nm capabilities into a 14nm core, but the company was already talking to journalists about its efforts to make that kind of flexibility possible in 2018. Even if you assume it hadn’t started the work yet (a poor assumption, in my opinion), it took two years to finish. Moving a CPU architecture between nodes is not a trivial undertaking.
Scientists used to wonder if planets were common throughout the universe, and now we know: they are. Observations with ground-based telescopes and space observatories like Kepler and TESS have proven planets are extremely common. There’s even a small, Earth-like planet right next door orbiting Proxima Centauri. We can say that with confidence now that a team from the University of Geneva has confirmed and refined the initial observations. While Proxima Centauri b is similar in size to Earth, it might not be a great place to vacation.
Scientists discovered Proxima b in 2016, but it took longer to confirm because of how it was detected. Most exoplanet identifications over the past decade come from the Kepler Space Telescope, which used the transit method of detection. When an exoplanet orbits its star, it can block out the star’s light for brief periods. By tracking these dips in brightness, we can infer the properties of the planet. This is a reliable way to spot planets, but it only works when the plane of the other solar system is aligned with ours. That is not the case for Proxima Centauri, the closest star to Earth at just 4.2 light-years distant.
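To see how small those brightness dips are, the fractional dip during a transit is roughly the ratio of the planet’s disk area to the star’s. A quick sketch, using approximate radii for an Earth-sized planet and a Sun-like star (the specific numbers are illustrative, not from the study):

```python
def transit_depth(planet_radius_km: float, star_radius_km: float) -> float:
    """Fractional drop in starlight when the planet crosses the stellar disk."""
    return (planet_radius_km / star_radius_km) ** 2

# Approximate radii: Earth ~6,371 km, Sun ~696,000 km
depth = transit_depth(6_371, 696_000)
print(f"{depth:.6f}")  # ~0.000084, i.e. a dip of less than 0.01 percent
```

Dips that shallow are measurable, but only when the planet’s orbit happens to cross our line of sight, which is why the transit method misses systems like Proxima Centauri’s.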
A team from the European Southern Observatory discovered Proxima b with the aid of HARPS (High Accuracy Radial Velocity Planet Searcher), a sophisticated spectrograph at the La Silla Observatory in Chile. A spectrograph can measure the small wobbles in a star’s motion that can indicate the presence of an exoplanet. Now, the University of Geneva team has fired up ESPRESSO, a more powerful spectrograph attached to ESO’s Very Large Telescope, to confirm Proxima b.
This is what we think it looks like on the surface of Proxima b, a temperate exoplanet orbiting in the habitable zone around a red dwarf in our nearest neighboring star system, Alpha Centauri. Image: ESO
The ESPRESSO data confirms Proxima b is there and that it’s just 1.17 times Earth’s mass. It also completes an orbit of its star every 11.2 Earth days. Despite being so close to the star, Proxima b is inside the habitable zone because Proxima Centauri is a small, cool red dwarf. With its presence confirmed, the team can also say with certainty that Proxima b receives about 400 times more X-ray radiation than Earth.
Because Proxima b doesn’t transit the star, it’s harder to gather data about its composition. We know it’s only slightly more massive than Earth, so it’s probably a rocky world. However, no one knows if it might have an atmosphere that could protect it from all that radiation. There’s a lot more to learn about Proxima b, but we might need to wait for future instruments like the James Webb Space Telescope to help us get there.
SpaceX has officially made history with the first successful launch of humans into space by a private company. The Crew Dragon capsule, carrying NASA astronauts Robert Behnken and Douglas Hurley, safely reached orbit 12 minutes into the launch and is now on its way to the International Space Station. It was the first crewed launch from American soil in almost nine years following the retirement of the Space Shuttle program. The joint NASA-SpaceX collaboration means the US no longer has to rely on Russian Soyuz launches to move astronauts to and from the International Space Station.
As part of the launch, the first stage of the Falcon 9 Block 5 rocket successfully landed on the company’s drone ship, appropriately named Of Course I Still Love You. At the 12-minute mark, the Dragon capsule separated from the second stage as it reached orbiting altitude.
Today’s launch took place on pad 39A at Kennedy Space Center after an aborted attempt this past Wednesday due to unfavorable weather conditions. Contrary to usual NASA procedure, SpaceX fueled the Falcon 9 after the two astronauts had already boarded the Dragon. While the mission is completely automated, both Behnken and Hurley retain the ability to manually control the capsule and will in fact do so as part of the flight, something SpaceX initially didn’t want.
NASA-SpaceX Demo-2 successful separation of the second Falcon 9 stage.
The astronauts will perform a variety of tasks aboard the Crew Dragon as part of the demo mission in addition to the manual-control demonstration. Then, at about 10:30 AM Sunday Eastern time, Hurley and Behnken will arrive at the International Space Station, where the Crew Dragon will engage its new automated docking system.
After a short stay on the ISS (the exact length of which has not yet been determined), the two astronauts will return to Earth in the Dragon and splash down in the Atlantic Ocean with parachutes. That will be the first time astronauts will have landed this way since 1975, before the days of the Space Shuttle program. While the Crew Dragon capsule does have its own SuperDraco engines, both as backup propulsion and as a launch-abort system, they won’t be used in this mission. Future landings with the engines may well be in the cards, though.
Following a successful SpaceX Demo-2 mission, NASA and SpaceX hope to use the Crew Dragon to ferry astronauts to and from the ISS on a regular basis.
If you have huge performance needs, or if you just want to build an astonishingly powerful PC, you won’t want to miss this deal. For a limited time, you can get AMD’s Ryzen Threadripper 3990X, which is among the most powerful CPUs on the market today with 64 CPU cores. Right now you can get it with a $540 discount, which helps take the sting off of buying one of these costly yet powerful CPUs.
AMD’s Ryzen Threadripper 3990X is arguably the single most powerful processor on the market today with a whopping 64 CPU cores and an astonishingly high number of threads that sits at 128. The CPU also has an enormous pool of cache that adds up to a total capacity of 292MB. You can also overclock this processor to unleash additional performance, but as it ships with a max operating frequency of 4.3GHz, you really don’t need to. If you want to build a computer with an almost ridiculous amount of processing power, this is what you want to buy, and for a limited time it’s on sale! Right now you can get the Threadripper 3990X from Amazon with a $540 discount that drops the price from $3,990.00 to just $3,449.99.
Apple’s newest smartwatch is the company’s first to feature an always-on display, which stays illuminated and shows on-screen information the entire time the watch is on. Like last year’s model, the new Watch Series 5 offers up to 18 hours of battery life on a single charge. For a limited time, you can get one from Amazon marked down from $399.99 to $299.99.
Dell’s Alienware Aurora pairs a fast Intel Core i7-9700 processor with an immensely powerful Nvidia GeForce RTX 2070 Super graphics card. This combination makes the system excellent for gaming — not to mention it also comes with a 1TB HDD that gives you plenty of storage space. This system typically retails for $1,419.99 but right now you can get it for $1,142.99 from Dell with promo code LCS10OFF.
This high-powered robot vacuum has 2,000Pa of suction power and a large 5,200mAh battery that enables it to run for up to 150 minutes on a single charge. The Roborock S5 also supports Wi-Fi and can be controlled using a smartphone app and Alexa voice commands. Right now you can get it from Amazon marked down from $599.99 to $364.79 with promo code ROBOROCKS5.
Apple’s iPhone XS Max is a feature-rich smartphone with a large 6.5-inch OLED display and a powerful A12 Bionic processor. For a limited time, you can get one of these phones with 64GB of internal storage from Woot marked down from $999.99 to just $699.99.
Amazon’s Fire TV Recast is a type of DVR device with 500GB of storage space that can hold up to 75 hours of video. It allows you to record up to four shows simultaneously, and this content can then be played back on a wide range of supported devices. Right now you can get this device from Amazon marked down from $229.99 to $149.99.
Dell upgraded this laptop with one of Intel’s new 10th generation Core i5-10210U processors, which has four CPU cores with a base clock of 1.6GHz. The system also comes with a fast NVMe SSD and a 1080p display, which makes it well suited for everyday tasks. It can even run some games at low settings thanks to a low-end Nvidia GPU. You can get it now from Dell marked down from $1,284.29 to just $699.00.
I don’t normally write up individual CPU sales or deals, but AMD’s Ryzen Threadripper 3990X is currently $540 off its $3,990 base price. That corresponds to a price cut of ~14 percent, which is fairly significant for a chip in this market segment.
I spent quite a bit of time with the 3990X earlier this year, including an early effort to hit a world record overclock courtesy of Mother Nature. For a brief window of time earlier this year, ET held the second-highest Cinebench R20 score, though other overclockers with access to LN2 have since hit higher performance levels. You can grab the chip on Amazon if you’re interested.
Is the 3990X a Good Investment?
The 3990X is the fastest x86 CPU you can buy today for certain workloads, but it doesn’t make much sense in others. Whether the chip is worth purchasing comes down to what you’re doing with it. The Windows 10 scheduler limits a processor group to 64 logical processors by default, which means applications that want to scale to the CPU’s full potential have to manage their own thread scheduling across groups.
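To put numbers on that limit: with at most 64 logical processors per Windows processor group, the 3990X’s 128 threads span two groups, and a thread only runs within one group unless the application explicitly spreads work across them. The arithmetic is simple:

```python
import math

MAX_LOGICAL_PROCESSORS_PER_GROUP = 64  # default Windows processor-group size

def processor_groups(logical_processors: int) -> int:
    """Number of processor groups Windows carves a given thread count into."""
    return math.ceil(logical_processors / MAX_LOGICAL_PROCESSORS_PER_GROUP)

print(processor_groups(128))  # Threadripper 3990X (64C/128T): 2 groups
print(processor_groups(64))   # a 64-thread chip still fits in a single group
```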
If you’re willing to overclock the CPU — and you’ve got a motherboard that can handle the load — there’s some serious additional performance to be gained. Setting even a modest all-core frequency of 3.7GHz yielded real improvements over stock, and higher all-core clocks seem achievable, considering our chip was capable of an all-core 4.3GHz.
But since we don’t recommend CPUs based on OC performance and overclocking is never a sure thing, the 3990X is going to remain more of an acquired taste than a dedicated enthusiast part. One area where it’s proven spectacularly useful is for mass video encoding. I’ve been running a great many encode tests as part of the Deep Space 9 Upscale Project (DS9UP). The 3990X is capable of handling 15-20 simultaneous file encodes, where the 10980XE begins to sag well below that number. While no single encode can stretch up to 128 threads, having a great many potential threads to throw at individual workloads has proven advantageous.
But while I’ve found an actual, practical use for a 64-core CPU, it’s a decidedly niche application — most people aren’t trying to run dozens of different encode tests simultaneously for the purposes of upscaling a television show. On the whole, the 3990X still represents a “halo” part for AMD, while the 3970X is the competitive top-end part. In workloads that can’t take full advantage of the 3990X, the 3970X’s faster clocks often deliver higher performance.
The cool thing about the 3990X is that if you do need one, it’s a really nice halo.
Over the past decade, artificial intelligence and machine learning have emerged as major hotbeds of research, driven by advances in GPU computing, software algorithms, and specialized hardware design. New data suggests that at least some of the algorithmic improvements of the past decade may have been smaller than previously thought.
Researchers working to validate long-term improvements in various AI algorithms have found multiple situations where modest updates to old solutions allowed them to match newer approaches that had supposedly superseded them. The team compared 81 different pruning algorithms released over a ten-year period and found no clear and unambiguous evidence of improvement over that time.
According to David Blalock, a computer science graduate student at MIT who worked on the project, after fifty papers “it became clear it wasn’t obvious what state of the art even was.” Blalock’s advisor, Dr. John Guttag, expressed surprise at the news and told Science, “It’s the old saw, right? If you can’t measure something, it’s hard to make it better.”
Problems like this, incidentally, are exactly why the MLPerf initiative is so important. We need objective tests scientists can use for valid cross-comparison of models and hardware performance.
What the researchers found, specifically, is that in certain cases, older and simpler algorithms were capable of keeping up with newer approaches once the old methods were tweaked to improve their performance. In one case, a comparison of seven neural net-based media recommendation algorithms demonstrated that six of them were worse than older, simpler, non-neural algorithms. A Cornell comparison of image retrieval algorithms found that performance hasn’t budged since 2006 once the old methods were updated:
There are a few things I want to stress here: First, there are a lot of AI gains that haven’t been illusory, like the improvements to AI video upscalers, or noted advances in cameras and computer vision. GPUs are far better at AI calculations than they were in 2009, and the specialized accelerators and AI-specific AVX-512 instructions of 2020 didn’t exist in 2009, either.
But we aren’t talking about whether hardware has gotten bigger or better at executing AI algorithms. We’re talking about the underlying algorithms themselves and how much complexity is useful in an AI model. I’ve actually been learning something about this topic directly; my colleague David Cardinal and I have been working on some AI-related projects in connection to the work I’ve done with the DS9 Upscale Project. Fundamental improvements to algorithms are difficult and many researchers aren’t incentivized to fully test if a new method is actually better than an old one — after all, it looks better if you invent an all-new method of doing something rather than tuning something someone else created.
Of course, it’s not as simple as saying that newer models haven’t contributed anything useful to the field, either. If a researcher discovers optimizations that improve performance on a new model and those optimizations are also found to work for an old model, that doesn’t mean the new model was irrelevant. Building the new model is how those optimizations were discovered in the first place.
The image above is what Gartner refers to as a hype cycle. AI has definitely been subject to one, and given how central the technology is to what we’re seeing from companies like Nvidia, Google, Facebook, Microsoft, and Intel these days, it’s going to be a topic of discussion well into the future. In AI’s case, we’ve seen real breakthroughs on various topics, like teaching computers how to play games effectively, and a whole lot of self-driving vehicle research. Mainstream consumer applications, for now, remain fairly niche.
I wouldn’t read this paper as evidence that AI is nothing but hot air, but I’d definitely take claims about it conquering the universe and replacing us at the top of the food chain with a grain of salt. True advances in the field — at least in terms of the fundamental underlying principles — may be harder to come by than some have hoped.
If you’ve used Windows for longer than an hour or two, chances are that you’ve interacted with Task Manager. The utility has been present in every version of Windows going back to Windows 95, though the version that shipped with that OS was far more primitive and it didn’t open when you hit Ctrl-Alt-Del (that key command opened the “Close Program” dialog instead). Now, the author of the application, Dave Plummer, has published his own guide to using it, including some tips we’ve never seen before.
If Task Manager crashes, you can restart it by hitting Ctrl-Shift-Esc. Windows will first attempt to revive the hung version; if it can’t, it’ll open a new window for you after a maximum of 10 seconds. Task Manager is also designed never to fail to load, even on a resource-constrained system: rather than failing altogether, it will fall back to loading one tab at a time.
Ctrl-Shift-Esc also launches Task Manager if you can’t access the “Run” command or can’t get Ctrl-Alt-Del to work.
If Task Manager is misbehaving, launching it (Taskmgr.exe) while holding Ctrl-Alt-Shift will restart it with all settings reset to factory defaults. This actually works for every app Plummer has written, though he didn’t include a list.
You can right-click on any process in the “Processes” tab and click “Open File Location” if you need to find the physical location of its executable file.
You can also add columns to the Task Manager if you want to change what it shows you.
Dave Plummer also ported Space Cadet Pinball to Windows and worked on a number of other aspects of Windows and MS-DOS before that. It’s not often we get to hear from the actual author of a core software application most of us use on a regular, if not daily, basis — so hopefully you picked up a few new tips.
One general Windows tip of my own that I’ll throw in, because I’ve never had a great place to put it, and it *is* Task Manager-centric. Assume you have a video game that’s locked up and refusing to show you the desktop. Ctrl-Alt-Delete works — it turns the screen blue and gives you the option to launch Task Manager — but you can’t actually see the Task Manager window. It’s buried underneath a frozen game screen.
There’s a solution to this.
When this happens, press Win + Tab. Right-click on the game or screen-grabbing application, choose “Move to,” and shove the app in question onto a different desktop.
This will clear your primary desktop and give you full access to Task Manager, which you can then use to kill the locked-up game. This can be incredibly useful with grabby 3D titles that won’t show you the desktop, even if they’re technically supposed to support alt-tab behavior.
For years, the Raspberry Pi has been the premier single-board computer for hobbyists. These devices cost as little as $5 and include all the core components of a computer, but there are also more powerful versions. The latest Raspberry Pi 4 has a new model with 8GB of RAM. Combined with its quad-core ARM chip and ample I/O options, the latest Raspberry Pi 4 can take on even more tasks that would have required a PC in the past. It even has a new 64-bit OS to leverage all that RAM.
The Raspberry Pi 4 launched last year in 1GB, 2GB, and 4GB RAM configurations. The rest of the hardware was a solid improvement over past versions, with a pair of HDMI outputs, USB 2.0 and USB 3.0 ports, gigabit Ethernet, and a quad-core Cortex-A72 ARM chip clocked at 1.5GHz. There’s even a USB-C port for power, although the first board revision had a flaw that prevented many USB-C cables from supplying it.
There was no 8GB RAM option at launch because there was no 8GB LPDDR4 chip compatible with the circuit board. It took a little work to accommodate the additional RAM on the Raspberry Pi 4: designers had to remove the old switch-mode power supply from the right side of the board (near the USB-A ports) and add a higher-capacity switcher next to the USB-C port on the left.
The official Raspberry Pi Linux build also needed some work. Until now, Debian-based Raspbian OS only came in 32-bit. 32-bit systems can only address about 4GB of RAM, so the 8GB module would go to waste. There are third-party operating systems that will see all that RAM, but the official Raspbian is now available as a 64-bit image. The Raspberry Pi foundation notes this is still an “early beta,” though.
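The 4GB ceiling is simple arithmetic: a 32-bit pointer can only name 2^32 distinct byte addresses. A quick sketch of the math (mine, not the Raspberry Pi Foundation’s):

```python
# A 32-bit address is one of 2**32 possible values, and each value
# names a single byte of memory — so the addressable space is fixed.
addressable_bytes = 2 ** 32
gib = addressable_bytes // (1024 ** 3)
print(f"{addressable_bytes:,} bytes = {gib} GiB")  # 4,294,967,296 bytes = 4 GiB
```

In practice a 32-bit OS sees somewhat less than that, since device memory maps share the same address space — which is why the 64-bit image matters for the 8GB board.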
The 8GB Raspberry Pi 4 is available now at various retailers for $75, which is $20 more than the 4GB edition. That extra RAM could be a real boon to several popular Raspberry Pi projects. For example, a media server like Plex or Kodi should be more responsive on the new 8GB version. The same goes for running a Minecraft server, which is bundled with Raspbian. The Raspberry Pi is all about creativity, and you can get more creative with 8GB of RAM than 4GB.
AMD’s Ryzen 7 3700X comes with eight SMT-enabled CPU cores with a max clock speed of 4.4GHz. This gives you exceptional performance for multitasking and running power-hungry applications. Currently, you can get it from Amazon marked down from $329.99 to $274.99.
This little M.2 SSD has a capacity of 250GB, and it can transfer data at a rate of up to 3,100MB/s. This makes it significantly faster than a 2.5-inch SATA SSD, and it’s also fairly inexpensive, marked down at Amazon from $79.99 to $54.99.
Equipped with an overclockable Intel Core i7 processor and an Nvidia GeForce RTX 2070 Super graphics card, this desktop has enormous gaming potential. It should be able to run games at 2K resolutions with relative ease. Get one today from Dell marked down from $2,149.99 to $1,449.99 with promo code 50OFF699.
Asus designed this motherboard to fit inside of a mini-ITX case, making it a suitable solution for building a compact SFF PC. It also features a high-end Realtek ALC 1220 audio codec and a built-in 802.11ac Wi-Fi NIC. Currently, this board can be picked up from Amazon marked down from $159.99 to $137.59.
Sandisk built this external SSD with a large 500GB capacity and a rugged water-resistant exterior. The drive can transfer data at speeds of up to 550MB/s over USB 3.1, which will far outstrip your typical USB flash drive and external HDD. You can currently buy this SSD marked down from its original retail price of $169.99 to $81.77.
Dell’s Vostro computers were designed as office and business solutions, and this Vostro 5000 is no different. It’s equipped with an Intel Core i5-9400 processor that offers mid-level processing performance that’s perfect for a wide range of office and work tasks. Dell is offering these systems for a limited time marked down from $998.57 to $499.00 with promo code SUMMER499.
In addition to being relatively small, this motherboard was designed to support overclocking. This means you can push up the clock speed on Intel’s unlocked CPUs to unleash additional performance. The board also features a pair of Gigabit NICs and integrated 802.11ac Wi-Fi. For a limited time, you can get one from Amazon marked down from $189.99 to $138.23.
Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com.
The last few months have seen a surge in conspiracy theories that tie 5G (a cellular networking standard) to coronavirus (a viral pandemic). It was inevitable that we’d see scam products designed to prey on people as a result, and lo and behold, the “5G BioShield USB key” appears to be having a moment.
The 5G BioShield USB Key contains “Proprietary holographic nano-layer catalyst technology” that provides “remediation from all harmful radiation, electro-smog, and biohazard pollution.” It’s currently available for £283 or £795 if you buy three, and features “quantum biological shielding technology.” The feature image above, pulled from the website, makes masterful use of contrasting visual elements to tell a simple story: Smoking USB stick + St. George and the Dragon + Lion = BioShield. Right.
You can tell this is going to be a fun trip just based on what the company claims to be capable of protecting you from. “All harmful radiation” would include high-energy gamma-ray bursts, which are capable of emitting as much energy as the Sun will emit in its entire lifetime over a handful of seconds. Next, we have “electro-smog,” a nonexistent phenomenon. So far, the BioShield can protect you from a nearby star collapsing into a black hole or the EM equivalent of Santa Claus. Finally, we’ve got “biohazard pollution.”
This is an astonishingly broad claim that the FDA really ought to be looking into. A “biohazard” is defined as “a biological substance that poses a threat to the health of living organisms, primarily humans.” This includes everything from bacteria and viruses to Nickelback.
How Does it Work?
I’m so glad you asked. The 5G BioShield USB Key works through quantum oscillations to reharmonize disturbing frequencies created by electric fog. It restores the coherence of atoms by geometrically optimizing them for the induction of life force. It does this by magnetically inducing spin in your energy field and emits a number of life-force frequencies to generally revitalize the body.
One of the above claims was made up by me. The other three are from the company’s website. Can you tell which one it is?
There are a lot of things people spend money on that I don’t personally find appealing. That’s fine. I once had a friend who made $200 by being willing to vacuum a dude’s house in a 1950s housedress and heels for two hours. Not the sort of thing I’d go for, but you do you.
(If I’m being honest, I was a bit jealous. I could’ve used $200.)
But devices like this are flagrant scams. They mislead people into believing they have acquired meaningful medical protection in exchange for garbage worth pennies on the dollar. No matter how frustrated I get with people who believe in conspiracy theories like a link between coronavirus and 5G, there’s a yet-worse group: The people who damn well know better and prey on the elderly, non-technical, and afraid merely because they can.
No matter how stupid theories about 5G and coronavirus sound to those of us who understand the technical facts of both situations, it isn’t crazy to doubt whether authorities — governments, corporations, or both — have been completely honest about the safety and efficacy of the products, solutions, or standards they peddle.
The United States government has, within living memory, tested biological agents on its citizens and used black citizens as guinea pigs in long-running medical experiments to observe the progression of syphilis without disclosing to these individuals that their condition was curable. A 1976 flu vaccine campaign in the United States (a response to a feared flu pandemic) caused ~450 people to develop Guillain-Barré syndrome, a potentially life-threatening autoimmune disorder in which the body attacks its own peripheral nervous system. People who remember these things aren’t crazy to have concerns. It’s precisely because they aren’t crazy that these issues deserve to be taken seriously and those concerns answered.
Finding out you’ve been taken in by a bad (or bad-faith) argument never feels good. But the one thing worse than being duped by a conspiracy theory is being someone who knows or ought to know that the theory is absolute claptrap and deliberately spreading it anyway. Selling this kind of snake oil during a global pandemic isn’t harmless.
5G does not cause coronavirus. 5G waves are not something you need protection from. If anything, mmWave 5G needs protection from you, since you’re a much bigger threat to its ability to propagate than its pathetic penetration characteristics are to your epidermis. If you are afraid of these things, please don’t be.
Moving to a new console generation has often meant leaving some of your favorite games behind, but Microsoft says that won’t be the case with the Xbox Series X. The team developing this next-gen console is testing older games to ensure they don’t just work but work better. Not only will older titles be faster and smoother, but Microsoft is also working on a host of technologies that can add new features to the games of yesteryear.
While Microsoft’s primary focus is on making new games that could never have run on older consoles, the Xbox Series X team has also spent more than 100,000 hours playing old titles on the console. Microsoft’s Director of Program Management Jason Ronald says thousands of classic games are already fully playable on the Xbox Series X, and the goal is to have 200,000 hours of play testing on the books by the time the console launches in late 2020.
Backward compatibility isn’t as simple as making sure you’ve got the right kind of disc drive. The system and chip architectures change in significant ways across console generations. Developers also optimize console games for a very specific set of capabilities and specs. A new console platform needs to know how to “talk” to older games. Microsoft built the tools for backward compatibility into the core of the Xbox Series X with its custom CPU and a new operating system hypervisor.
According to Ronald, classic games will load faster than they did on their original consoles. Plus, more demanding games that struggled to maintain a smooth frame rate on base-model hardware like the Xbox One S will perform unencumbered on the Series X.
Microsoft isn’t just going to lean on faster hardware and call it a day, though. The Xbox Advanced Technology Group has developed several new features that will enhance the classic game experience. For example, an HDR reconstruction technique will automatically add HDR support to legacy titles. It’ll even work on games from the original Xbox that were developed almost 20 years ago. Microsoft has touted the upcoming console’s “Quick Resume” feature, which allows players to quickly return to their game from a suspended save state. That feature will work with backward-compatible games with no changes to the games themselves.
The Xbox Series X will even be able to render older games at resolutions up to 4K. The team is also working on technologies that will allow doubling the frame rates on older games. Again, this all happens on the platform side — game developers don’t need to change anything.
Microsoft aims to have the Xbox Series X ready for sale this holiday season. It has not announced pricing yet, but rumors suggest it’s waiting for Sony to announce the PS5 price so it can undercut it with the Series X.
Following a surprisingly strong reception among gamers for its first-generation Reverb VR head-mounted display (HMD), HP has deepened its relationship with Valve as part of launching its newly announced Reverb G2 ($599). Using a keynote slot at Augmented World Expo 2020, the two companies, along with Microsoft, outlined their partnership around the new model Reverb and full SteamVR support.
HP’s Reverb G2 By the Numbers
The headline number for the G2 is unchanged from the G1 — 2K x 2K resolution per eye. That keeps it among the highest-resolution end-user devices out there but doesn’t break new ground. However, the G2 does have a newly designed LCD that HP says provides a much better image with improved contrast and clarity. The new panels also run at a 90Hz refresh rate, making them a good fit for gaming. In addition, the G2 features brand-new optics designed by Valve and an interpupillary distance (IPD) adjustment with a range of 60-68 mm. For me, this is a must-have for any decent HMD, so it is good to see HP adding it.
One of the biggest hardware changes is the addition of two side-mounted tracking cameras, bringing the total to four. HP claims the difference in tracking accuracy is obvious and substantial. The G2 also borrows Valve’s audio technology, with speakers and spatial audio similar to the Valve Index. Dual microphones are unchanged from the first generation Reverb. New controllers also sport a more traditional game controller layout — similar to those used by Oculus, for example. The grip buttons also have an analog readout now, for additional flexibility in application design.
Ergonomics Also Get an Upgrade
For starters, there is now a single-barrel 6-meter cable connecting the G2 to a computer. If you have a USB-C port that provides at least 6 watts of power, you can simply plug in the G2. If you have a lower-power port, or only traditional USB 3.0 Type-A ports, you’ll need to use the included splitter and AC power adapter. The redesigned headband also has a flip-up feature, so you can quickly put the HMD to your face without having to fiddle with the headband. Video is DisplayPort 1.3, and the G2 ships with a DisplayPort-to-mini-DisplayPort adapter.
Gamers: HP and Valve Now Have Your Back
While the G2 is still a Windows MR device, HP has worked closely with both Microsoft and Valve to support SteamVR games and apps. So by launch, the plan is that you should be able to take full advantage of SteamVR content.
Buying a Reverb G2
If you live in the US, you can now pre-order a Reverb G2 for $599. You get the HMD, two of the new-design controllers, and cables. For backpack use, a shorter cable is available, as are replaceable face protectors. Other geographies will follow, of course. Unfortunately, don’t expect to actually get your unit until the fall, when it’s scheduled to ship. You can purchase the new controllers separately if you’d like to use them with an existing Reverb.
Microsoft has debuted the Windows 10 May 2020 update, also known as Windows 10 2004. The update was delayed somewhat thanks to the impact of coronavirus, and certain aspects of it are still in the pipeline, but the bulk of its promised features are baked-in and ready for prime time.
New features baked into Windows 10 2004 include reduced installation times for future updates. Microsoft argues it has reduced update times from over 80 minutes in 1703 to 16 minutes in 2004, with only a single reboot required for many users. This is an improvement I’ve noticed but didn’t have hard data on — updates definitely install more quickly now than they did when Windows 10 was newer.
Cortana has been enhanced with a chat-based UI that supposedly allows her to respond more effectively to typed or natural language queries. Cortana’s overall portfolio has shrunk in the past year, so Microsoft appears to be focusing on making her better at a smaller group of tasks. Also, you must now be logged into a Microsoft account, work account, or school account to use Cortana, which makes local accounts far more attractive. You can now adjust the position of the Cortana window in Windows 10 2004 as well.
Microsoft has made a number of accessibility improvements to Windows 10, including new options to customize your text cursor, and a new option for Magnifier to read text aloud. Microsoft has also redesigned Narrator to improve its efficiency and will use audio to demarcate when the software has begun or finished scanning text rather than having it say “Scan on” or “Scan off.” Microsoft has also improved support for announcing capital letters and words.
Narrator actually got a number of significant changes to how it handles web browsing and email and now supports Firefox as well as rich text in both Chrome and Firefox. Overall, these sound like a significant set of improvements for low-vision individuals. Eye controls have also been improved, with a new “Switch” function for clicking buttons and a “Dwell” function for selecting them.
There are new improvements to the Windows Subsystem for Linux, including support for ELF64 Linux binaries and ARM64 device support. There are also new improvements to Bluetooth pairing and various Task Manager enhancements. GPU temperature is now reported in Task Manager under the Performance tab. Any chance we could get that for CPUs, too?
Windows Search should now be more efficient, and it understands near-miss typos like “Exce” or “Powerpiont,” where the OS previously returned no results.
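Microsoft hasn’t detailed the matching algorithm, but you can get a feel for near-miss matching with ordinary fuzzy string comparison. A minimal sketch using Python’s standard difflib (the app list and cutoff are my own illustration, not how Windows Search is implemented):

```python
import difflib

APPS = ["excel", "powerpoint", "word", "notepad", "paint"]

def search(query):
    # Return apps whose names score above the similarity cutoff,
    # best match first — tolerating typos and truncated queries.
    return difflib.get_close_matches(query.lower(), APPS, n=3, cutoff=0.6)

print(search("Powerpiont"))  # ['powerpoint']
print(search("Exce"))        # ['excel']
```

An exact-match lookup would return nothing for either query; a similarity threshold is what turns “no results” into the intended hit.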
DirectX 12 Ultimate and… Notepad?
DirectX 12 Ultimate is more of a minor update to DX12 than a major new version release. Essentially, DX12U cleans up the standard, ensuring that there’s a common set of capabilities that define the DX12 standard. As optional features like DXR (DirectX Raytracing) were implemented, different GPUs with very different feature sets could all claim to be “DirectX 12” capable, making it more difficult for customers to know what kind of DX12 card they needed to buy.
We covered the major features of DX12U in this story, but this is a feature level that will be met by Nvidia’s Turing and Ampere, AMD’s upcoming RDNA2, and Intel’s Xe architecture. Keep in mind, DX12U games will run on DX12 GPUs. Those of you who remember the distinction between DirectX 10.0 and 10.1 may remember a similar issue between AMD and Nvidia over that standard. Both AMD and Nvidia GPUs could play DX10 games — AMD just had a few extra bells and whistles that Nvidia lacked. This is an analogous situation. DirectX Raytracing 1.1 and variable rate shading are the biggest new features in DX12U.
As for Notepad, Microsoft had to publish a separate document to list all of the improvements. One massive improvement? Wrap-around Find/Replace. Text zooming is also now available, and line and column numbers now display when Word Wrap is enabled. You’ve also got the option to send feedback about Notepad directly from Notepad, which seems like kind of a nice thing to do given that Microsoft just updated an application nobody was sure it remembered existed.
These aren’t all the changes in Windows 10 2004 — you can grab the IT professional-specific list here and a consumer-oriented version here — but it’s the major highlights. There aren’t a lot of attention-grabbing improvements, but there are a number of useful gains. The accessibility changes are particularly welcome, as is Microsoft’s overall focus on this area of Windows development.
If you want the Windows 10 2004 update, you’ll have to grab it via Windows Update for now. Microsoft has more details on its blog.
We’ve known for several decades that the dinosaurs were most likely wiped out by a meteor impact, but ongoing research continues to discover new nuances to the overall situation. New data published in Nature Communications suggests that the dinosaur-killer hit at a steep and somewhat uncommon angle — and that the consequences for life on Earth were significant.
Most reports and discussions of Chicxulub assume that the asteroid struck at a 90-degree angle. While an easy simplification, this is likely untrue; only one in 15 meteor impacts is steeper than 75 degrees, and only 25 percent occur between 60 degrees and vertical, according to the paper. Furthermore, when an asteroid strikes at 90 degrees, three distinctive features — the mantle uplift center, peak ring center, and crater center — are all on top of one another. That’s not the case at Chicxulub. Instead, these features are staggered off-center, with the peak ring center and the mantle uplift center on opposite sides of the crater center. This indicates the impact angle was something other than 90 degrees.
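Those odds come from the standard assumption in impact studies that arrival angles follow a sin(2θ) probability distribution, which integrates to a cos²θ chance of an impact steeper than θ. A quick sanity check of the paper’s figures (my sketch, not the authors’ code):

```python
import math

def frac_steeper_than(theta_deg):
    # Under the assumed p(θ) = sin(2θ) impact-angle distribution,
    # the fraction of impacts arriving steeper than θ is cos²(θ).
    return math.cos(math.radians(theta_deg)) ** 2

print(frac_steeper_than(75))  # ~0.067 — about 1 impact in 15
print(frac_steeper_than(60))  # ~0.25 — 25 percent of impacts
```

Both numbers line up with the one-in-15 and 25 percent figures quoted in the paper.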
The crater center is the center of the area the asteroid or comet excavated, the peak ring center is the center of the inner ring of displaced rock that forms in this type of complex crater (as shown in Lowell crater below), and the point of maximum mantle uplift is the spot where the mantle rose highest under the crust in response to the impact. After a hit like the Chicxulub impactor, the Earth would have rung like a bell for days, seismologically speaking.
Lowell Crater, Mars, with peak ring visible. Image from NASA via Wikipedia
The researchers modeled a variety of impact angles and speeds to determine what the most likely criteria for the impactor were. What they found strongly suggests that the asteroid or comet approached at a 60-degree angle, based on the remains of the crater and how the debris was distributed. The images below show the trajectory of a 60-degree impact versus a 30-degree impact.
At low impact angles, the center of the mantle uplift and the center of the simulated peak ring are both shifted downrange. When the impact occurs at a high angle, the mantle uplift offsets uprange, while the peak impact ring offsets downrange. The degree of offset depends on the impact angle, and 60 degrees matches the offsets we see at Chicxulub.
The “worst-case scenario” comes into play because of what the asteroid hit. The rocks underneath the Chicxulub impact site were rich in hydrocarbons, sulfur, and CO2, in part thanks to huge organic deposits left over from living things. The 60-degree impact, according to the researchers, released 2-3x more sulfur and CO2 than a 90-degree impact would have, and 10x more than a very shallow (15-degree) impact would have.
In short, we may exist today because the dinosaurs didn’t just get hit by an asteroid — they got hit by an asteroid in the worst possible way. Had the asteroid arrived moments later, or at a slightly different angle, the last 66 million years of history on Planet Earth might have gone down a very different path.
Upgrade your PC with an ultra-fast Adata XPG M.2 SSD that can transfer data at a blazing speed of 3,500MB/s. Today, you can pick one of these drives up with a 1TB capacity for the low price of just $109.99.
Adata’s XPG SX8100 is a fast NVMe storage solution that can read data at speeds of up to 3,500MB/s and write data at 3,000MB/s. It also can hold a lot of data with a total capacity of 1TB. Using a $10 clickable coupon, you can buy one today from Amazon marked down from $119.99 to $109.99.
This well-priced laptop comes equipped with a fast AMD quad-core processor, a 1080p display, and a half-terabyte NVMe SSD, all for the low price of $699.99. Just use promo code 50OFF699 at checkout at Dell.com to drop the price down from $949.99.
We’re living in dangerous times, but you can help keep your home safe with a home video surveillance device like this one from Eufy. Eufy’s Wi-Fi Video Doorbell features an HD camera with a resolution of 2,560×1,920. It also has a built-in microphone and speaker, letting you chat with anyone who comes to your door even when you’re not home. For a limited time, you can get one of these devices with a free wireless chime from Amazon marked down from $135.99 to $99.99.
Engineered to utilize the new 802.11ax Wi-Fi standard, Netgear’s RAX15 Wi-Fi router can transmit data at a rate of 1.8Gbps over an area of up to 1,500 sq ft. It also has a built-in USB port for adding network resources such as a printer or USB storage. Currently Amazon is selling these routers marked down from $149.99 to $99.99.
Alienware designed this headset so that it can be used wirelessly over a 2.4GHz connection or over a 3.5mm jack when the battery is low. The headset also has RGB LED lights built into the ear cups that give it some extra flair. Right now you can get one of these headsets from Dell marked down from $229.99 to $159.99.
Apple’s new AirPods Pro utilize a design that’s different from the company’s older AirPods earphones. The key new feature is active noise cancellation. Each earphone also uses a custom driver and a high-dynamic-range amplifier to improve sound quality. You can snag them with a $22 discount from Amazon that drops the price from $249.99 to $227.99.
Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com.
Learning to code doesn’t have to be the arduous process that many of the uninitiated assume it to be. While grasping all the intricacies of programming certainly isn’t simple, some learning methods approach education through gamification. By turning important lessons and critical knowledge acquisition into quick, achievable tasks that resemble game levels, even coding starts to feel a whole lot less scary.
Right off the bat, even students new to the concept of coding will see that the Learnable method is very user-friendly and geared toward getting you familiar with the core tenets of programming fast.
When you log into Learnable via desktop or on iOS or Android devices, this e-learning platform makes it easy to dive into some of the more basic coding disciplines, like Java, PHP, and SQL, all the way up to more involved training in C#, C++, Python, Swift, and more.
No matter which skill you choose, Learnable will lead you from baseline skills up to practical application, with rewards along the way. As you accrue progress points, advance through levels, and earn specific course badges, you’ll probably feel like you do when you beat Candy Crush…but instead, you’ll actually be earning a new and extremely important 21st-century skill set.
Learnable’s Study Planner can help keep your education on track, allowing you to better organize your lessons, whether you’re getting ready for a job interview or just reacquainting yourself with something you may have forgotten.
And since learning something new isn’t always a picnic, you can turn it all off once in a while and let Learnable’s built-in Meditation app calmly recenter your thoughts and offer some peace before you dive back in.
For as long as the iPhone has existed, there have been people trying to “jailbreak” it to bypass Apple’s draconian restrictions. The cat-and-mouse game between Apple and hackers has continued over the last decade, but Apple has usually come out on top. For the first time in years, a hacking group has released a universal jailbreak for iOS. The “unc0ver” tool is a bit harder to install than the jailbreaks of old, but it works on almost all modern iDevices.
When we talk about jailbreaks, what we’re really seeing is the manifestation of a security vulnerability. Apple carefully controls what apps and services are allowed to do on its devices by controlling app distribution. If an app doesn’t follow Apple’s rules, it doesn’t make it into the App Store. Jailbreaking an iPhone allows you to install more powerful apps with access to the system. Users like that, but Apple doesn’t because, again, this is technically a security hole.
Jailbreaks used to be plentiful and easy to install — there were even a few that worked simply by going to a webpage. As Apple has tightened security, jailbreaks have become less common. Unc0ver is the first universal jailbreak since the release of iOS 10, and it works on all iOS-powered devices running iOS 11 through the latest 13.5 builds.
The official unc0ver website has instructions on installation, which requires connecting the device to a computer. Depending on your desktop operating system, you can use tools like AltStore, Apple’s Xcode, or Cydia Impactor to install the unc0ver .ipa file. Since you need physical access to a device, there’s no risk of running into malware leveraging this vulnerability. However, it does make your phone less secure: anyone who gets their hands on it can access your data with the help of unc0ver.
The vulnerability at the heart of unc0ver is in the iOS kernel, a component that connects the hardware to the software. This is a so-called zero-day flaw, which means Apple has no previous knowledge of it, and no patches exist. You can bet that Apple’s developers are pulling apart the unc0ver installer to figure out how it works, though.
If you’re interested in jailbreaking your phone, you might want to get on it. The team behind unc0ver expects Apple will have a patch in a matter of weeks. Anyone who is already jailbroken at that point will be able to stay on the older software builds if they want to keep using the jailbreak features.
Tesla has cut its prices on multiple models, with price cuts of up to $5,000 on specific vehicles. The cuts may be an attempt to stimulate demand after the pandemic, but Tesla hasn’t announced an official rationale.
First up, the Standard Range Plus variant of the Model 3 has picked up a $2,000 price cut, dropping from $39,990 to $37,990. The Model S Long Range Plus is now $74,990, a $5,000 reduction from its previous price of $79,990. Both vehicles dropped by ~5-6 percent, so the degree of reduction is equivalent on both cars.
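The “~5-6 percent” figure is just the size of the cut divided by the old price. For example, using the prices above:

```python
def pct_cut(old_price, new_price):
    # Percentage reduction relative to the original price.
    return 100 * (old_price - new_price) / old_price

print(f"Model 3 SR+: {pct_cut(39990, 37990):.1f}%")              # 5.0%
print(f"Model S Long Range Plus: {pct_cut(79990, 74990):.1f}%")  # 6.3%
```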
Similarly, the Model S Performance is now $94,990, and the Model X Long Range Plus is down to $79,990. The only product line that doesn’t seem to have been impacted by price cuts is the Model Y, but that vehicle only recently launched, and cutting its price by $5,000 now would be a slap in the face to people who’ve just recently purchased the car.
The automotive market is currently in bad shape. A report from Meticulous Research suggests that the Covid-19 pandemic could knock 12-15 percent off the global automotive industry in 2020. Industry tracker ALG believes May 2020 vehicle sales will be 21 percent below May 2019. Include the impact of reduced fleet sales, and the decline is larger, down an estimated 32 percent from last year.
So, should we all expect great deals? Unclear. Hertz’s recent bankruptcy could flood the market with used vehicles because the company has already stated it intends to begin some fleet liquidation as part of its Chapter 11 proceedings. If buyers head for used vehicles instead of new ones, we could see more manufacturers offering aggressive discounts to move new vehicles.
As for Tesla, specifically, opinions are divided. Some investors think this is a sign of improved profitability at Tesla thanks to larger economies of scale and that the company has the room to cut prices and attempt to stimulate demand. Those who are more bearish on Tesla see the move as intended to ward off a potential demand cliff. I’m scarcely an automotive analyst, but judging by the reports coming out of the industry, you don’t have to be to see that manufacturers are spooked by the idea of a long-term decline in car-buying thanks to COVID-19. That’s the kind of cliff that every manufacturer could fall off, not just Tesla. If car sales don’t pick up in the near future as the economy reopens, auto manufacturers may face serious problems in the months ahead.
ARM announced a pair of new CPU core designs on Tuesday and launched a significant new competitive strategy in the process. The Cortex-A78 is a new core that emphasizes power efficiency over raw performance. It’s a step-wise improvement for ARM over the Cortex-A77, and it’ll undoubtedly show up in plenty of designs next year, either as the high-end core in a midrange or upper-midrange design, or the midrange core in a three-tier big.LITTLE.littlest design.
The ARM Cortex-X1, on the other hand, is something genuinely new and exciting. Up until now, there’ve effectively been two players in the ARM CPU market: Apple and everyone else. Apple has driven single-threaded ARM performance far above anything any other company has delivered, and it is the only company to offer an ARM SoC that could plausibly challenge the likes of AMD or Intel at the top of the performance stack (in single-threaded performance).
At a high level, the differences between the two come down to resources: the X1 doubles SIMD throughput, decodes five instructions and dispatches eight Mops per cycle, and offers up to 1MB of L2 and 8MB of L3 cache.
Dispatch bandwidth has been increased by 33 percent, and the out-of-order window is larger (224 entries, up from 160) to help the X1 extract more instruction-level parallelism. The integer pipelines appear identical to the Cortex-A78’s, but FPU resources have increased, with 2x the SIMD pipelines for NEON support. ARM continues to support 128-bit vector registers with no 256-bit or wider capability, but doubling up the 128-bit units does partially compensate for that.
Cache bandwidth is substantially higher, with doubled available bandwidth to both L1 and L2, as well as the already-mentioned doubling of L2 capacity. The L2 has been redesigned to improve its access latency and offers 10-cycle latency compared to 11 cycles on the Neoverse-N1. The L2 TLB is also 66 percent larger.
Two Chips to Rule Them All
ARM is dividing the Cortex-A78 and the Cortex-X1 to allow the two families to play in somewhat different markets. The X1 is the performance-at-all-costs CPU core that’s unlikely to show up in clusters of 4-8 cores but could serve as the basis for a server play or a much higher performance ARM PC than anything we’ve seen to date. If you were serious about building an ARM-based Windows PC that could keep up with Intel or AMD, the X1 would be the easy choice — while it may not be as power-efficient as the A78, x86 emulation carries enough overhead that throwing extra silicon at single-threaded performance is the right call.
Overall, ARM is moving into position to challenge x86 more directly. I wouldn’t start drawing up title cards for an x86 versus ARM battle just yet — the long-foretold fight between the two architectures appeared poised to begin in the mid-2010s, just before Intel quit the tablet market. ARM hasn’t exactly muscled into the desktop and server markets yet, and until it does, we can’t exactly declare that the two architectures have come to blows. Both AMD and Intel, however, ought to be looking nervously over their shoulders. They’ve got some potential competition on the horizon.
According to a recent Yahoo News/YouGov poll, 44 percent of Republicans believe that Bill Gates is plotting to use a COVID-19 vaccine campaign as cover for a mass microchip injection campaign. The survey, conducted May 20-21, showed substantial deviations between Democrats, Republicans, and independents on a host of issues. Among them: A significant gap in the belief that Bill Gates is attempting to use the coronavirus to inject Americans with tracking chips. Forty-four percent of Republicans believe this, compared with 19 percent of Democrats and 24 percent of independents.
Why This Isn’t Technically Possible
Before I talk about the conspiracy theory, I want to address the technical aspect of the question. Let’s forget the Bill Gates angle for a moment. Could an injectable microchip be used to provide tracking in the manner contemplated by this theory?
Anything injected into the body has to be incredibly tiny in order to pass through your blood vessels without causing an embolism. Objects that tiny cannot carry much in the way of battery capacity and have very limited lifespans even in the best of cases. Even assuming we could build an injectable microchip, we have no way to keep it powered for any length of time.
Similarly, there’s no way the microchips would be able to transmit information independently. The human body is not an ideal environment for data transfer, and a tiny microchip tracker wouldn’t have the power to drive a radio. There are pilot projects for injectable robots and wireless power delivery, but not a single system capable of delivering the kind of technological breakthrough required to implement an injectable chip-based tracker.
The truth is, it would be far easier for governments to require Google and Apple to install mandatory tracking apps suited to their specific nations than to develop injectable microchips that can track everyone for the purposes of enforcing coronavirus quarantine (or whatever other nefarious idea was dreamed up).
Coronavirus, Partisanship, and Belief
As the pandemic has progressed, Democratic and Republican views of it have diverged. There are various explanations for this, including the fact that the worst outbreaks have been in blue states. Anecdotal evidence strongly indicates quarantine has been observed differently in different places; where I live in New York State, mask compliance has been near 100 percent. My friends in other states indicate this is very much not the case.
Self-identified Republicans believe many more factually incorrect things about the coronavirus than Democrats or independents do. There is no credible evidence that the coronavirus was engineered in a lab; the US has conducted far fewer tests than the rest of the world (and far fewer than it should have); Sweden’s death rates have been much higher than Norway’s or Denmark’s; and there is no evidence COVID-19 is a bioweapon (it would also make a terrible one).
Not all the incorrect beliefs are on the Republican side. Democrats mistakenly believe that coronavirus cases have surged in red states when the actual growth has been much slower and the overall situation is still murky. 73 percent of Democrats also believe President Trump called the virus a hoax; this is incorrect. Apparent evidence that he had done so was demonstrated to be a fake video edited in a misleading manner. (This data point is not shown in the graph above.)
The reasons for the beliefs above can be tied to the inaccurate and false statements often made by the President and uncritically repeated by various media outlets, but the “Bill Gates wants to weaponize coronavirus to track everyone” theory is a decided outlier. President Trump has never mentioned it. Fox News hasn’t pushed it. Furthermore, the other two conspiracy theories on this list — 5G and GMOs — score much lower with all Americans. Why do Bill Gates and the supposed coronavirus link stand out?
The idea of injectable chips, specifically, plays on common fears in American conspiracy theories related to the New World Order, black helicopters, and the Mark of the Beast. There has always been a significant streak of Christian eschatology in the beliefs of the survivalist and militia movements of the 1980s and 1990s that shaped the conspiracy theory fringe of what would become the Tea Party circa 2010.
The practical explanation for the conspiracy theory is that some people have dramatically misrepresented research Gates funded into the idea of passively tracking vaccine deliveries by using nano-imprinted quantum dots that could later be read by a smartphone scanner.
Nothing about the idea had anything to do with tracking. The point was to create a record that a patient had been immunized that wouldn’t depend on the often-poor record-keeping in developing nations. The invisible quantum dot pattern doesn’t transmit information and hasn’t been commercialized. The fact that Gates funded it, combined with his continued interest in digital identity concepts (even though he wants these identities to empower end-users far more than the status quo does), has been used to stoke the flames of the conspiracy theory.
But why do people believe an idea like this in the first place, and why this conspiracy, specifically? As to the second, I’d wager it’s because Bill Gates is a known figure in the US, he’s associated with technology (which people increasingly distrust), he was critical of the United States’ early response to the coronavirus pandemic, and he is a single figure who has been powerful for much of many Americans’ lives. He’s a singular target and focal point for a lot of uncertainty and anxiety right now.
The government doesn’t really need to inject you with a microchip when it can mandate the deployment of a smartphone app instead. Image by XKCD
But part of the reason I think the Bill Gates/microchip vaccine theory has caught on is that it also plays on a particular type of political argument that Americans respond to, called the American Jeremiad.
The American Jeremiad
Writing in 1978, Sacvan Bercovitch described how Americans updated the ancient lamentations of the prophet Jeremiah to suit our own political vernacular:
American writers have tended to see themselves as outcasts and isolates, prophets crying in the wilderness. So they have been, as a rule: American Jeremiahs, simultaneously lamenting a declension and celebrating a national dream.
The American Jeremiad is a type of sermon or speech (the technique is widely used in secular speechmaking today, as well as in religious contexts) in which the speaker outlines a standard or principle of public life that we ought to uphold, details the ways in which Americans have fallen away from or failed to practice that standard, and then expresses a belief that by returning to these ideals and practices we can capture or create a better life for all of us.
The Gettysburg Address is perhaps the most perfect example of an American Jeremiad ever written. It begins with a recognition that our forefathers “brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal.” It then acknowledges that the American people have fallen from this position: “We are engaged in a great civil war, testing whether that nation or any nation so conceived and so dedicated, can long endure.”
How does it end?
“[T]hat this nation, under God, shall have a new birth of freedom—and that government of the people, by the people, for the people, shall not perish from the earth.”
100 percent American Jeremiad. Accept no substitutes.
Martin Luther King Jr’s “I Have a Dream” speech is an equally powerful but significantly longer demonstration of the same art form. You can trace American Jeremiads back to John Winthrop’s “A Model of Christian Charity” sermon, given in 1630 aboard the Arbella before the Puritans had even reached the New World. If the Gettysburg Address speaks to you, hug a Pilgrim.
The idea that Bill Gates is working on a COVID-19 vaccine that will appear to save the world but secretly damn us all to an eternity of surveillance can claim some philosophical cover from the concerns of civil libertarians. But it borrows most of its conceptual punch from the Bible’s description of the Mark of the Beast. Bill Gates has talked, in vague terms, about requiring citizens to present a certificate verifying that they are currently disease-free in order to enter a store. To some people, that sounds like the imposition of a mark that everyone must have in order to shop or engage with society.
The idea that we must be aware of this threat to our fundamental liberties dovetails perfectly with the philosophical argument that the shutdown was an overblown, fearful reaction to a non-issue. It transfers the locus of blame from a shapeless virus or the vague specter of government to a single individual, and it offers a means by which ordinary Americans can reclaim their power in a time when they feel powerless. Rejecting the idea of a vaccine is painted as a demonstration of one’s faith in God or one’s commitment to the sacred ideals America was founded on, rather than a supposed commitment to live in fear. The overlap between these groups isn’t perfect, but it doesn’t have to be.
Arguments about the Mark of the Beast are also fundamentally arguments about purity and keeping the body sanctified. It doesn’t matter if that means rejecting the idea of a coronavirus vaccine over fears of contamination from “chemicals” or RFID chips or because of religious fears. Anti-vaccination advocates are another group of people who are strongly motivated by purity/contamination concerns, and this line of thinking often manifests itself in surprising ways in the United States.
Even though many conspiracy theorists are not explicitly religious, there are common themes of collapse and renewal to be found in the Book of Revelation and in the idea that America is under siege and in danger of fundamental collapse, and that only the actions of a free and independent group of citizens dedicated to the principles expressed in the founding of the Republic can save it. In the late 1990s, those self-described groups of people were the various militia movements, with their certainty that black helicopters would soon arrive with troops and launch the New World Order. In the 2010s, we saw a very similar argument with a hefty dash of birtherism in fears over “Jade Helm.” Now, it’s COVID-19 vaccines.
What sets the coronavirus vaccine hoax apart from the idea that 5G or GMOs cause or contribute to COVID-19 is that the decision to take a vaccine is a choice. It’s very difficult to avoid both GMOs and cell radio signals, but you can make the decision not to submit to medical treatment. The fact that there’s choice involved allows the body purity arguments and the “Be a righteous citizen of the Republic by being one of the select few who knows the truth” arguments to team up and go looking for Objective Reality so they can break its legs in some dark alley. It also ties to fears about Silicon Valley and the concentration of power in the hands of the few — something Democrats also worry about, of course, but have not widely identified with Bill Gates in this instance.
As for why it’s far more attractive to Republicans than Democrats? Probably some combination of a general distrust of technology, distrust of experts, distrust of perceived liberals, distrust of those who criticize President Trump, distrust of those who have called for continued action to minimize the threat of the coronavirus, and a general belief in some quarters that America took the wrong path in dealing with COVID-19. A 2019 study suggested that populists — who tend to distrust both experts and democracy — were much more likely to believe in conspiracy theories than other types of people. YouGov’s research suggests roughly 24 percent of Americans identify as populists according to its metrics.
NASA and SpaceX are just hours away from making history. After years of development and testing, SpaceX is set to become the first private spaceflight firm to carry American astronauts into space as part of NASA’s Commercial Crew Program. This is also the first crewed launch from US soil since the retirement of the Space Shuttle, a long-overdue step that will free NASA from reliance on Russian Soyuz launches. With the big moment approaching, we chatted with former astronauts Cady Coleman and Nicole Stott, both veterans of multiple Space Shuttle launches, to see how they felt about the return of crewed spaceflight to the US. Spoiler: pretty excited.
How We Got Here
SpaceX has moved quickly to develop the technology that makes its launch platform suitable for NASA service — it’s providing both the rocket (Falcon 9) and crew module (Dragon) for these launches. Former astronaut and retired USAF Colonel Cady Coleman says that has a lot to do with the way a private aerospace firm operates. “It’s a different world now. If you think back to the early space program, the government really was the designer. Working together [with private firms] is more necessary than it ever was because of the ability that commercial companies have. They can take bigger risks with [developing] hardware.”
SpaceX didn’t get here on its own, though. “The SpaceX team has had access all along to the lessons learned from NASA’s other programs,” says former astronaut Nicole Stott. “That’s a real advantage when going into a new project. When we have public-private partnerships, we can avoid re-learning the same lessons.”
Today’s launch is primarily about the Crew Dragon capsule, sometimes called Dragon 2. This is the same type of spacecraft that SpaceX used in last year’s uncrewed Demo Mission-1. Unfortunately, that craft exploded when it was undergoing testing back on Earth. SpaceX and NASA had to push back the launch timeline, but all systems are go just a year later. That might seem fast to an outside observer, but both Coleman and Stott expressed great confidence in the way NASA and its commercial crew partners have worked together. “We’ve always had a ‘here’s how we can’ not ‘here’s why we can’t’ approach,” says Coleman.
Today’s launch, known as Demo Mission 2 (DM-2), will take place on the historic launch pad 39A at Kennedy Space Center. SpaceX is using a Falcon 9 Block 5 design, the same rocket the company uses for cargo missions on a regular basis. This core in particular (B1058) has never been launched before, but SpaceX will try to land it on its drone ship after it detaches from the Dragon.
If all goes to plan, the Falcon 9 carrying astronauts Doug Hurley and Bob Behnken will leave the launchpad at 4:33 PM EDT. This launch will differ from past crewed missions in several ways, and it took time for SpaceX and NASA to come together on the details. “In getting ready for launch, there are some things that are just a given,” says Coleman. “NASA has done this and that forever, but SpaceX says ‘we’re not doing it that way.’ And some of that is maybe not well-thought-out, and some of that is actually a really good new idea.”
Unlike previous NASA crewed launches, SpaceX will fuel the Falcon 9 after Hurley and Behnken board the spacecraft. The launch and approach to the ISS will be automated like the Demo-1 mission last year, but Hurley and Behnken will still have the option to manually control the capsule. According to Nicole Stott, that wasn’t SpaceX’s intention at the outset.
“For a long time, SpaceX as a company thought they wouldn’t need those manual backups anymore — you know, we can do everything redundantly with the electronics in the spacecraft,” she said. “Maybe at some point we’ll get there, but I think when there are humans in the spacecraft, we’re looking for that manual backup.”
Robert Behnken (left) and Douglas Hurley (right)
While the Dragon 2 has superficial similarities to the older capsule-based spacecraft like the Apollo command module and Soyuz, it’s a much more futuristic design. Nicole Stott describes it as having a “new car feel” with a “simple elegance.” Stott says the Space Shuttle cockpit had displays, switches, and circuit breakers almost surrounding the crew. By comparison, the Dragon 2 has a few large touchscreens and compact manual controls.
After reaching orbit, Hurley and Behnken will be able to remove their restraints and float around the capsule. As this is a demo mission, NASA will most likely have an array of tasks for the crew to complete as they monitor the Dragon’s performance. Just like the ascent, rendezvous and docking will be controlled autonomously by the Dragon. After a brief stay aboard the ISS, Hurley and Behnken will return to Earth in the Dragon.
The Crew Dragon should splash down in the Atlantic Ocean under parachutes, which SpaceX tested one final time earlier this month. The Dragon capsule technically has the ability to land propulsively with its SuperDraco engines, which also power the launch abort system. However, NASA opted for the tried-and-true parachute option. That’s not to say SpaceX will never have a chance to use those engines for landing.
“I think we’re going to continue looking at [propulsive landings] as an option,” says Stott. “When you get into reduced gravity environments like landing on the moon or on Mars, we’ve done that in the past. I think we’re looking at what makes the most sense with the time we have available.” Essentially, NASA needs a reliable US spacecraft now, and we know parachutes work.
After the completion of DM-2, the Crew Dragon will be ready to ferry astronauts to and from the ISS on a regular basis. Of course, that assumes everything goes well. Spaceflight is dangerous, even more so when it’s a new spacecraft. Both SpaceX and NASA have maintained a positive outlook — NASA actually chose to publicize the overall Loss Of Crew (LOC) risk of 1 in 276. Prior to the first Space Shuttle launch, the agency’s engineers estimated the LOC as at least 1 in 500. After reviewing real mission data, they said it was probably closer to 1 in 12. By the end of the Shuttle program, it was 1 in 90.
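Per-mission odds like these compound over repeated flights. Here’s a minimal sketch of that arithmetic using NASA’s published 1-in-276 figure; the 10-mission horizon is purely an illustrative assumption, not something NASA has stated:

```python
# Turning a per-mission Loss Of Crew (LOC) figure into a cumulative risk.
# The 1-in-276 estimate is NASA's; the 10-mission horizon is an assumption.
def cumulative_loss_risk(per_mission_risk, missions):
    """Probability of at least one loss across `missions` independent flights."""
    return 1 - (1 - per_mission_risk) ** missions

risk = cumulative_loss_risk(1 / 276, 10)
print(f"{risk:.1%}")  # about 3.6% over ten flights
```

Small per-flight risks add up, which is why NASA treats even a 1-in-276 number as something to keep improving.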
We can only hope that the mission is a complete success and these launches become non-events — astronauts just hop on their space bus and commute to the ISS. But today, Hurley and Behnken are making history. Nicole Stott put it succinctly, saying, “They’re setting off a new era of getting back into space from the US, helping us expand what we do with all our partners in space. And as always, with the goal of improving life here on Earth.”
This historic launch will take place at 4:33 PM EDT today with live streams from both NASA and SpaceX. National Geographic and ABC News will also have two hours of live coverage starting at 3 PM EDT today on “Launch America: Mission to Space,” featuring Cady Coleman among others. In the event of bad weather, NASA has another launch window set for May 30th.
Galaxies come in all shapes and sizes, but few are ring galaxies. Astronomers studying one of these objects on the other side of the universe have noted some startling properties. As with other ring galaxies, this one is not a result of internal processes driving stars apart. The team believes R5519 is the result of a cataclysmic collision in the early universe in which another object punched a hole through the middle of R5519.
This galaxy is a whopping 11 billion light-years away, meaning it’s also an artifact of the early universe. It shows a large ring-shaped perimeter with no discernable central bulge. The ring is about 42,400 light-years across, and the hole through its center is 17,612 light-years wide. By comparison, the Milky Way is between 150,000 and 200,000 light-years across.
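For scale, a quick comparison of the ring’s diameter against the Milky Way’s, using only the figures quoted above:

```python
# How R5519's ring compares to the Milky Way (figures from the article).
ring_diameter = 42_400                 # light-years
milky_way_estimates = (150_000, 200_000)  # light-years, low and high estimates

for mw in milky_way_estimates:
    print(f"ring / Milky Way ({mw:,} ly): {ring_diameter / mw:.2f}")
# The ring spans roughly a fifth to a quarter of the Milky Way's diameter.
```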
Some ring galaxies studied in the past showed evidence of orbital resonance or accretion of material from other objects as the driving force behind the “ring.” However, orbital effects should only occur in barred galaxies, which R5519 is not. Likewise, accretion wouldn’t disrupt the core as seen in R5519. R5519’s unusual shape and origins could change the way scientists understand the formation of galaxies during the first few billion years after the Big Bang.
So, a collision seems possible simply by process of elimination. However, there’s additional evidence that R5519 had a violent past. The rate of star formation in the ring is extremely high: the team estimates that 80 solar masses’ worth of new stars are born each year in this region, which is why it refers to R5519 as a “cosmic ring of fire.” That could be a result of the gravitational turmoil of another galaxy passing through the center of R5519. The resulting density waves would propagate outward, condensing dust and gas into regions that promote the formation of new stars.
This offers an interesting chance to study galaxy formation in the early universe. In the universe’s first few billion years, most galaxies were warped, irregular blobs, and disc-shaped galaxies were rare. Yet you can’t have a ring galaxy unless it started as a disc. The discovery of R5519 suggests disc-shaped galaxies were not unheard of in the early universe, and a disc that became a ring is certainly something of note that warrants further study.
If you missed out on buying a new computer over the Memorial Day holiday weekend, you’re in luck. Today, you can get one of Dell’s best gaming laptops with more than $600 cut off the price tag. You can also save 30 percent on a new Apple iPhone XS Max.
If you want a fast notebook with plenty of performance for running the latest games, you may want to consider Dell’s Alienware M15 R1. This system was built from the ground up for gaming, and it features a fast six-core processor, an Nvidia GeForce RTX 2070 Max-Q GPU, and a high-quality 1080p 144Hz IPS display. You can get this system from Dell marked down from $2,194.99 to $1,574.99 with promo code LCS10OFF.
Apple’s iPhone XS Max is a feature-rich smartphone with a large 6.5-inch OLED display and a powerful A12 Bionic processor. For a limited time, you can get one of these phones with 64GB of internal storage from Woot marked down from $999.99 to just $699.99.
Amazon’s newest Kindle comes with a built-in front light, a high-quality 167ppi display, and a battery that can last for weeks without needing to be recharged. For a limited time, you can get this Kindle along with a three-month subscription to Kindle Unlimited marked down from $89.99 to just $59.99.
Enjoy your games to the fullest with a blazing 240Hz monitor! In addition to its extreme refresh rate, this display features support for both FreeSync and G-Sync and a fast 1ms response time giving you a highly responsive gaming experience. Right now you can get this display from Dell marked down from $709.99 to $399.99.
Logitech’s G604 Lightspeed is a high-end wireless mouse that’s well-suited for gaming and work. Logitech claims it has excellent battery life and can last for up to five and a half months on a single AA battery. Right now you can get one of these mice from Amazon marked down from $99.99 to just $69.99.
This laptop comes equipped with one of Intel’s new 10th generation Core i7-10510U processors that has four CPU cores clocked at up to 4.9GHz. The system also comes with fast NVMe SSD storage and a 1080p display, which makes it well-suited for just about any type of work or any non-gaming activity. You can get it now from Dell marked down from $1,427.14 to just $779.00 with promo code SUM999LT5590.
Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com.
The global economy can rise and fall, but a few central truths will always remain. And no matter where the ups and downs of world business are whipsawing everyone on a given day, companies have to depend on steady, proven, gifted project managers to lead teams, chase goals and carry organizations to successful outcomes.
In a business climate rife with remote workers and shifting economic fortunes, it’s now more vital than ever that the right project managers are in place to ensure efficiency and productivity.
One of the most time-honored paths to achieving project manager status is to get certified as a Project Management Professional (PMP) by one of the field’s most respected organizations, the Project Management Institute (PMI).
This three-course collection begins with the Certified Associate in Project Management (CAPM) Certification course, an entry-level certification for showing you understand the fundamentals, processes, and terminology of project management. Over 14 hours of self-paced training with 5 mock exams and 750 unique questions, students tackle basic management processes of the PMI bible, the Project Management Body of Knowledge (PMBOK) 6th Edition.
Next, Project Management Professional (PMP) Certification lays the groundwork for using your CAPM skills to earn PMP status, working in virtually any industry organizing resources, overseeing personnel, setting timetables, and managing expectations on any critical project.
This course covers everything in the PMP syllabus and leaves students ready to take and pass the PMP exam.
Finally, the PMI Risk Management Professional (PMI-RMP) course rounds out the training, featuring another 11 hours of examination on solving advanced project management issues around growth, complexity, and diversity.
This methodical approach to devouring and using the PMI and PMBOK methods can put students on the fast track to getting employed as a certified project manager, earning the upward mobility and job security most employees are looking for these days.
Normally a nearly $200 value, all this project management training can be yours now for almost 90 percent off that total, only $24.99.
Motorola was one of the first major smartphone makers to jump on the foldable phone bandwagon by reviving its long-dead Razr brand. The foldable Razr earned points for its slick design, but the specs and durability left something to be desired. The latest out of the rumor mill is that Motorola is planning another foldable Razr to launch this year that will (hopefully) solve many of the first-gen’s problems.
The current Razr device has a large 6.2-inch foldable OLED inside a clamshell form factor. Motorola made a big deal out of the way its custom hinge design bends the display into a U-shape when closed, which reduces the appearance of a crease. However, many owners have found the hardware to be less reliable than it ought to be for $1,500.
A Motorola executive confirmed last week that a second-gen Razr was on the way, and a separate leak has revealed some details of the device. The phone is codenamed “smith” and carries model number XT2071-4. Like the first-gen phone, it will be a clamshell-style foldable with a large internal OLED display and a smaller screen on the exterior. The camera array will get a big boost over the 16MP primary and 5MP selfie cameras on the current Razr. Sources peg the new phone as having a 20MP selfie camera and Samsung’s 48MP ISOCELL Bright GM1 as the primary.
The current Razr has a midrange Snapdragon 710 ARM chip, which is much less powerful than you’d expect for the $1,500 asking price. It also sports 6GB of RAM and 128GB of storage. The new Razr should give all those specs a boost with 8GB of RAM, 256GB of storage, and a Snapdragon 765 chip.
The addition of a Snapdragon 765 means 5G is on the menu for this phone. While the 765 can do millimeter-wave 5G, it’s not as fast as the 865 in that respect. It’s also unclear whether current millimeter-wave antennas could fit in a folding phone like the Razr. Still, it will at least support sub-6GHz 5G signals via the integrated X52 modem. The first Razr had short battery life with its meager 2,510mAh cell. The new version will reportedly bump that up to about 2,800mAh. However, 5G will consume more power, so it might be a wash.
The leak doesn’t include information about carrier support or pricing, but it’s sure to be a spendy phone. Folding phones are still a luxury item, and they will be for some time.
Thus far, AMD has been quiet about its plans for any Ryzen refresh cycle in 2020, though the ongoing coronavirus pandemic has undoubtedly scrogged up some of the company’s plans. We’ve seen a pair of leaks surface online claiming to share details on what the firm has planned through the fall, though as always, take these leaks with a grain of salt.
First, let’s talk about CPUs. The report here is that AMD will add three Ryzen refresh CPUs to its lineup: a Ryzen 9 3900XT, a Ryzen 7 3700XT, and a Ryzen 5 3600XT. These three chips would arrive with higher base and boost clocks and an estimated 1.05x – 1.10x performance increase over their predecessors. Reports on the branding are divided — HotHardware reports that the chips might instead increment the model number by 50 points (3950X, 3750X, etc). Either option is plausible, but if I had to guess, I’d guess AMD will use either the numbers or the numbers plus letters. Differentiating your parts solely on a one-letter suffix (X versus XT) isn’t smart if you want consumers to be able to tell them apart and not buy the wrong chip. AMD also hasn’t used the “XT” moniker for CPUs before, so deploying it here would be a first for the company.
Meanwhile, over in graphics, AMD is said to be planning a Big Navi with up to 5,120 stream processors, a die size of supposedly 505mm2, and 50 percent improved performance per watt. The 50 percent performance-per-watt uplift is something Lisa Su has spoken about before, so we know that part of the rumor is legitimate according to AMD’s guidance. The 505mm2 falls under the category of “things that could be true.” The 5700 XT was 251mm2, and Big Navi looks like it’s roughly the size of two smaller Navis, so that all lines up.
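As a quick sanity check on those numbers (my own arithmetic, assuming RDNA's usual organization of 64 stream processors per compute unit):

```python
# Sanity-check the Big Navi rumor against known Navi 10 figures.
SP_PER_CU = 64                 # stream processors per compute unit in RDNA
rumored_sps = 5120             # rumored Big Navi shader count
navi10_die_mm2 = 251           # Navi 10 (Radeon RX 5700 XT) die size
rumored_die_mm2 = 505          # rumored Big Navi die size

cus = rumored_sps // SP_PER_CU
print(f"{rumored_sps} stream processors = {cus} CUs (Navi 10 has 40)")
print(f"2 x Navi 10 = {2 * navi10_die_mm2}mm2, vs. rumored {rumored_die_mm2}mm2")
```

That works out to 80 CUs — exactly double Navi 10's 40 — on a die within a few square millimeters of two Navi 10s, which is why the rumored die size is at least internally consistent.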
The specifics of the rumor, however, don’t make a ton of sense unless we assume a few things about AMD’s future product mixture. Supposedly there are three Navi chips coming — Navi 21, Navi 23, and a Navi 10 Refresh. Navi 21 is Big Navi, with up to 5,120 cores.
The descriptions for the specific GPUs make no sense unless “similar to” means “in the same relative position, with vastly higher performance and higher price points.” AMD’s 80-CU RDNA2 isn’t going to be similar to the 5700 XT in price or performance unless something goes catastrophically wrong. We don’t know anything about Navi 23, except that the die is supposedly on par with the original Navi 10. This would imply that Navi 23 is either denser than Navi 10 or offers significantly higher performance per square millimeter.
Squeezing four SKUs out of Navi 21 would be unusual, so I’m not quite sure what to make of that. Typically, Nvidia and AMD use their highest-end GPU dies to power 1-2 cards, not four of them. Either way, Navi 21 has to be intended for battle against Nvidia’s uppermost echelons, with the smaller Navi 23 taking over in the spots where the 5700 XT and 5700 sit now. This would clear the way for refreshed Navi 10 cards to take price cuts, likely pushing Polaris down to the lowest market tiers or out of the space altogether.
What doesn’t quite make sense about all of this is that it leaves AMD with a rather large number of SKUs. Nvidia’s current leading-edge lineup is the RTX 2080 Ti, followed by the 2080 Super, 2070 Super, 2060 Super, 1660 Super, and 1650 Super. This leak contemplates four high-end GPUs, three Navi23 GPUs, and three Navi10 cards. That’s considerably more SKUs than AMD has previously fielded.
As far as the CPU rumors go, I find them entirely believable. A 5-10 percent uplift for a Ryzen refresh cycle isn’t overwhelming, but it moves the ball forward a bit on the way to Zen 3, and it’s easy to believe that there was some headroom to be found in TSMC’s 7nm process after further refinement. I don’t expect any core count increases this year or in the near-term future — having just pushed the boundary above the point where Windows can easily take advantage of its thread counts, AMD is under no particular pressure to boost core counts again.
The GPU rumors really only cover code names, but it makes sense that AMD would hit Nvidia from top to bottom. The big unknown here is Ampere, and how much performance it will offer out of the gate. AMD could find itself sitting comfortably or see the rug yanked out from under its new intended competitor, and we really don’t know which to expect. Between the two families, CPUs are expected in-market first, with GPUs not launching until September, but both of those statements are themselves rumors and should be treated accordingly.
Update (5/25/2020): This article is several years old, but it’s one of my favorites and one of the most interesting topics we’ve talked about. There’s an old saying: “What hardware engineers create, software engineers take away.” That’s not the fairest way to look at the situation — modern computers can do far more than old ones — but the struggle to keep systems responding quickly while ramping up their complexity is not a series of unbroken triumphs. Even top-end PCs struggle to offer the latency of machines that offered a fraction of their performance.
Original story continues below:
Comparing the input latency of a modern PC with a system that’s 30-40 years old seems ridiculous on the face of it. Even if the computer on your desk or lap isn’t particularly new or very fast, it’s still clocked a thousand or more times faster than the cutting-edge technology of the 1980s, with multiple CPU cores, specialized decoder blocks, and support for video resolutions and detail levels on par with what science fiction of the era had dreamed up. In short, you’d think the comparison would be a one-sided blowout. In many cases, it is, but not with the winners you’d expect.
Engineer Dan Luu recently got curious about how various devices compare in terms of input latency. He carried a high-speed camera around to measure input lag on some of them because this is the sort of awesome thing engineers sometimes do. What he found is rather striking, as shown by the table below:
The system with the lowest input latency — the amount of time between when you hit a key and when that keystroke appears on the screen — is the Apple IIe, at 30ms. A respectable third place goes to a Haswell-E system with a 165Hz monitor. #T refers to the number of transistors in each chip; the color-coding shows that chips with higher transistor counts tend to be in systems with more latency, and faster systems tend to be older than slower ones.
Improving monitor refresh rate clearly helps; the same Haswell-E rig has 90ms less input latency on a 165Hz display compared to a 24Hz display. If you’ve ever used a display with a 30Hz refresh rate, you’ve likely seen this; the difference between 30Hz and 60Hz is easily visible to the naked eye. But it clearly doesn’t make the entire difference in and of itself.
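A rough back-of-the-envelope calculation (my own figures, not Luu's data) shows why refresh rate helps but can't explain the whole gap: on average, a finished frame waits half a refresh interval before the display shows it.

```python
def avg_display_wait_ms(refresh_hz):
    """Average wait for the next screen refresh: half of one frame time.

    This ignores pixel response time and everything upstream of the
    display, so it's a lower bound on the display's latency contribution.
    """
    frame_time_ms = 1000.0 / refresh_hz
    return frame_time_ms / 2.0

for hz in (24, 30, 60, 165):
    print(f"{hz:3d}Hz: ~{avg_display_wait_ms(hz):.1f}ms average scanout wait")
```

The average wait is about 20.8ms at 24Hz versus roughly 3.0ms at 165Hz — a difference of under 18ms, so the 90ms improvement Luu measured must also come from other parts of the pipeline (buffering, compositing, and so on).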
Luu has been examining latency from several angles, and we’d recommend his articles on keyboard and mouse latency if you want to follow up. In some cases, it’s literally impossible for a system to offer lower latency than an Apple IIe, because the keyboard’s latency alone may be higher than the Apple system’s entire input latency. Also, gaming keyboards may not be faster than normal keyboards, and even if they are, median keyboard latency is high enough that shaving 3.5ms off doesn’t improve the total input latency very much.
Why Modern Systems Struggle to Match Old Ones
This boils down to a single word: complexity. For the purposes of this comparison, it doesn’t matter if you use macOS, Linux, or Windows. An Apple IIe with an open terminal window and nothing else is sitting there, waiting for input. Its keyboard polls at an effective rate of 556Hz and uses a custom chip for keyboard input, as opposed to polling the keyboard with a microcontroller. This video, from Microsoft’s Applied Sciences Group, discusses why low-latency input is important.
An Apple IIe isn’t handling sophisticated multi-tasking commands. It isn’t juggling background threads, or dealing with multiple applications that aren’t designed to be aware (or careful) of one another. It isn’t polling a huge array of devices that range from audio and network controllers to discrete GPUs and storage. The Apple IIe OS doesn’t use a compositing window manager, which adds latency. This article, by Pavel Fatin, is an in-depth breakdown of latency processing and discusses how much delay each step in a modern system adds, from keyboard scan to final output.
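Fatin's breakdown treats end-to-end latency as a sum of per-stage delays. The sketch below uses purely hypothetical stage budgets (my illustrative numbers, not Fatin's measurements) to show how individually small stages add up on a modern system:

```python
# Hypothetical per-stage latency budgets for a modern desktop, in ms.
# These values are illustrative stand-ins, not measured data.
pipeline_ms = {
    "keyboard scan/debounce": 8,
    "USB polling interval": 4,
    "OS input processing": 2,
    "application/event loop": 5,
    "compositor frame wait": 8,
    "display scanout + pixel response": 13,
}

total = sum(pipeline_ms.values())
for stage, ms in pipeline_ms.items():
    print(f"{stage:34s} {ms:3d}ms")
print(f"{'end-to-end':34s} ~{total}ms")
```

No single stage looks alarming on its own, but the sum can easily exceed the Apple IIe's entire 30ms budget — which is the core of the complexity argument.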
I ran this test in SublimeText 2, not PowerShell, so don’t compare it against the results above. One thing it illustrates? Refresh rates really matter. The first two results are at 60Hz, the third is at 24Hz.
It should also be noted that the speed of text input can vary from terminal to terminal. PowerShell is now the default shell of Windows 10, and text input speed in PowerShell is… bad. I write my stories by default in SublimeText, which has little-to-no observable lag. PowerShell, in contrast, is laggy enough that you can perceive a gap between when you type and when the text appears (although not a particularly large one).
Either way, this article is an interesting example of how, despite myriad advances, low-latency input remains challenging. Complexity is often a very good thing, but we pay a performance penalty for it.
Scientists have identified thousands of exoplanets thanks to instruments like the Kepler Space Telescope. With each new world we examine, we learn more about how planets develop across the universe. Studying planets as they form would be the holy grail, and astronomers may have spotted a place where we can do just that. The European Southern Observatory (ESO) has released images of a primordial solar system with swirls of gas that could be the beginning of planetary formation.
Astronomers have good evidence that planetary formation takes place in the disc of dust and gas around young stars, but they’ve never been able to take sufficiently sharp images to identify the small eddies that signify a planet is coming into existence. Several years ago, the Atacama Large Millimeter/submillimeter Array (ALMA) scanned a star called AB Aurigae, located 520 light-years from Earth. The data suggested this young star’s primordial disc might contain small disturbances consistent with planetary formation. The ESO sought to confirm that with the Very Large Telescope (VLT).
The VLT has a relatively new adaptive optics instrument called SPHERE (Spectro-Polarimetric High-contrast Exoplanet REsearch). This allows the telescope to capture higher quality images with better contrast, but only in a very narrow field of view. That’s perfect for taking a close look at a single star, though. The ESO conducted an observational campaign of AB Aurigae in late 2019 and early 2020, resulting in the newly released images.
The images of the AB Aurigae system showing the probable location of a forming exoplanet.
The orange swirl is the dust and gas orbiting AB Aurigae. The dark region near the center is about the size of Neptune’s orbit around the sun — even with SPHERE, we can’t zoom in beyond this level without losing detail. However, it’s sufficient to make out a probable baby planet. The brighter “twist” highlighted above is precisely what astronomers expected a planet might look like at this stage of development. Over eons, material will gather together, exerting gravitational influence on nearby space. Eventually, it bulks up and becomes spherical by absorbing everything else in its orbit, and then it’s what we’d call a planet.
The ESO is currently building the 39-meter Extremely Large Telescope (they’re clearly great at naming things) to build on the work of ALMA and SPHERE. When it comes online in 2025, the Extremely Large Telescope should be able to take a closer look at this probable infant exoplanet. The upcoming James Webb Space Telescope could also take a closer look. It will have a smaller mirror, but its vantage point in orbit will be much better.
Nvidia has created the first generative network capable of producing a fully functional video game without an underlying game engine. The project began as a test of a theory: Could an AI learn to imitate a game well enough to duplicate it, without access to any of the underlying game logic?
The answer is yes, at least for a classic title like Pac-Man — and that’s an impressive leap forward in overall AI capability.
GameGAN uses a type of AI known as a Generative Adversarial Network. In a GAN, two AIs contend with each other, each trying to beat the other.
Here’s a hypothetical: Imagine you wanted to train a neural network to determine whether an image is real or has been artificially generated. This AI starts with a base set of images that it knows are real, and it trains on identifying the telltale signs of a real versus a synthetic image. Once you’ve got your first AI model doing that at an acceptable level of accuracy, it’s time to build its generative adversary.
The goal of the first AI is to determine whether an image is real or fake. The goal of the second AI is to fool the first AI. The second AI creates an image and checks whether the first AI rejects it. In this type of model, it’s the performance of the first AI that trains the second, and both networks are periodically updated via backpropagation to improve their ability to generate (and detect) better fakes.
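To make that loop concrete, here is a minimal toy GAN in plain Python — my own illustrative sketch, not Nvidia's GameGAN code. A tiny linear "generator" learns to mimic samples from a Gaussian distribution, while a logistic "discriminator" learns to tell real samples from generated ones; each side's update is driven by the other's current behavior.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Clamp to avoid overflow in exp for extreme inputs.
    x = max(-30.0, min(30.0, x))
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: samples from N(4.0, 0.5). The generator never sees these
# parameters directly — only the discriminator's reaction to its output.
def real_sample():
    return random.gauss(4.0, 0.5)

a, b = 1.0, 0.0   # generator: g(z) = a*z + b, with z ~ N(0, 1)
w, c = 0.1, 0.0   # discriminator: D(x) = sigmoid(w*x + c)

lr, batch = 0.05, 32
for step in range(3000):
    # --- Train discriminator: push D(real) toward 1, D(fake) toward 0 ---
    gw = gc = 0.0
    for _ in range(batch):
        xr = real_sample()
        dr = sigmoid(w * xr + c)
        gw += (1.0 - dr) * xr          # gradient of log D(xr)
        gc += (1.0 - dr)
        xf = a * random.gauss(0.0, 1.0) + b
        df = sigmoid(w * xf + c)
        gw -= df * xf                  # gradient of log(1 - D(xf))
        gc -= df
    w += lr * gw / batch
    c += lr * gc / batch

    # --- Train generator: push D(fake) toward 1, i.e. fool the critic ---
    ga = gb = 0.0
    for _ in range(batch):
        z = random.gauss(0.0, 1.0)
        df = sigmoid(w * (a * z + b) + c)
        ga += (1.0 - df) * w * z       # gradient of log D(g(z)) w.r.t. a
        gb += (1.0 - df) * w           # gradient of log D(g(z)) w.r.t. b
    a += lr * ga / batch
    b += lr * gb / batch

fake_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000
print(f"generator mean after training: {fake_mean:.2f} (real data mean: 4.0)")
```

As the two sides push against each other, the generator's output mean drifts from its starting point of 0 toward the real distribution's mean — the same dynamic GameGAN exploits at vastly larger scale, with images instead of single numbers.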
The GameGAN model was trained by allowing it to ingest both video of Pac-Man plays and the associated keyboard actions used by the player at the same moment in time. One of Nvidia’s major innovations that makes GameGAN work is a decoder that learns to disentangle static and dynamic components within the model over time, with the option to swap out various static elements. This theoretically allows for features like palette or sprite swaps.
A video of GameGAN in action. The team has an approach that improves the graphics quality over this level, and the jerkiness is supposedly due to limitations in capturing the video output rather than a fundamental problem with the game.
I’m not sure how much direct applicability this has for gaming. Games are great for certain kinds of AI training because they combine limited inputs and outcomes that are simple enough for an AI model to learn from but complex enough to represent a fairly sophisticated task.
What we’re talking about here, fundamentally, is an application of observational learning in which the AI has trained to generate its own game that conforms to Pac-Man’s rules without ever having an actual implementation of Pac-Man. If you think about it, that’s far closer to how humans game.
While it’s obviously possible to sit down and read the manual (which would be the rough equivalent of having underlying access to the game engine), plenty of folks learn both computer and board games by watching other people play them before jumping in to try themselves. Like GameGAN, we perform static asset substitution without a second thought. You can play checkers with classic red and black pieces or a handful of pebbles. Once you’ve watched someone else play checkers a few times, you can share the game with a friend, even if they’ve never played before.
The reason advances like GameGAN strike me as significant is that they don’t just represent an AI learning how to play a game. The AI is actually learning something about how the game is implemented purely from watching someone else play it. That’s closer, conceptually, to how humans learn — and it’s interesting to see AI algorithms, approaches, and concepts improving as the years roll by.
Memorial Day weekend is finally here, and there are dozens of excellent deals to help you celebrate the holiday with a little online shopping. Among the best deals available is a $100 discount on Apple’s Watch Series 5, which drops the price to just $299. You can also save on an Arlo Pro 2 home security camera system.
Apple’s newest smartwatch is the company’s first to feature an always-on display, which remains illuminated and shows on-screen information even when you aren’t actively using the watch. Like last year’s model, the new Watch Series 5 offers up to 18 hours of battery life on a single charge. For a limited time, you can get one from Amazon marked down from $399.99 to $299.00.
Dell designed this modern laptop with one of Intel’s new Core i5-1035G1 processors and a fast 256GB M.2 NVMe SSD. The notebook also features an LED-backlit keyboard, which makes the system look cool as well as being useful while typing in the dark. If you’ve been looking for a computer for work or for your child to do online classes, this system should work well for you. Currently, you can get this system from Dell marked down from $709.99 to $549.00.
Arlo’s Pro 2 security system includes four wireless 1080p cameras. This makes it possible to strategically monitor the entrances to your home and keep watch for intruders and any other suspicious activity. Currently, you can get this set from Amazon marked down from $649.99 to $399.99.
Sometimes, the rich just do get richer. The market for qualified product managers has been red hot in recent years, leading to average salaries of over $107,000 a year. As if that wasn’t enough incentive for job seekers to give product management a look, companies in the WFH era are more conscious than ever about formalized oversight and structure around their biggest projects.
“Massive” could also describe this training package. In all, students get access to 13 different courses, all with an eye toward boosting efficiency, building smart and productive teams, and getting products completed and projects finished on time and on budget.
Even those who have never worn the crown of leadership can get a firm grip here on how to lead, including the way of thinking behind effective management (Lean Management), how to plan and properly execute a project (Project Management Fundamentals), the four major phases of the product lifecycle (Become a Product Manager) and even tips for landing a project manager position (Skillsets to Shift Your Career to Product Management).
Of course, that’s just the start as the training escalates to personal relationship building around effective meetings and collaboration; as well as hardcore coding disciplines like constructing wireframes, understanding databases and building apps of your own.
The Advanced Product Management and Advanced Product Management #2 courses explore new areas of the position through real world examples, such as using a more modern and agile form of roadmapping, understanding what vision and strategy actually mean, and even communication strategies, career acceleration and becoming a world-class public speaker.
Finally, you’ll also learn how to better measure your results with Google Analytics Certification, helping leaders become more data-driven in spotting solid, helpful insights about their work.
With some of these courses costing as much as $200 separately, you can literally save thousands by getting the complete bundle now for only $39, just $3 per course.
Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com.
The ability to restore sight to the blind is one of the most profound acts of healing medicine can perform, in terms of the impact on the affected patient’s life — and one of the most difficult for modern medicine to achieve. We can restore vision in a limited number of scenarios, and some early bionic eyes on the market can restore limited vision in very specific circumstances. Researchers may have taken a dramatic step toward changing that in the future, with the results of a new experiment to design a bionic retina.
The research team in question has published a paper in Nature detailing the construction of a hemispherical retina built out of high-density nanowires. The spherical shape of the retina has historically been a major challenge for biomimetic devices.
Light enters the eye through the lens, which is curved — which means the light that hits the retina has already been curved. When you use a flat sensor to capture it, there’s an intrinsic limit to how much the image can be focused. This seems like the sort of thing cutting-edge AI might be able to help with, but the amount of processing power available at the back of a human eyeball is limited and the latency requirement for vision is pretty much nil. Alternatively, we could solve the hemisphere problem. That’s what Zhiyong Fan, an electronic and computer engineer at the Hong Kong University of Science and Technology, and the rest of the research team did.
They started with a hemisphere of aluminum foil (as one does). Electrochemical treatment transformed the foil into an insulator known as aluminum oxide and left it studded with nanoscale pores across its surface. These densely clustered holes became the channels for the perovskite nanowires that mimic the function of the retina itself. Perovskite is also used in the manufacture of solar cells. Once the nanowires were grown, the researchers capped the eye with an artificial lens and filled it with an ionic liquid to mimic the vitreous humor in our own eyeballs.
This ionic liquid is important to the process, allowing the nanowires to detect light and transmit its signals to external, image-processing electronics.
The performance of the artificial eye is impressive. Because it isn’t limited by the biological parameters of our own lens, it can respond to wavelengths of light up to 800nm. The human visual range tops out around 740nm; colors above this wavelength appear black to us. If we could see at 800nm, we’d be seeing into the near-infrared band (considered to be 750 – 1400nm). Processing time for light patterns is ~19ms, or about half that of the human eye. Cutting the eye’s reaction speed to 19ms might reduce total human reaction time — and the artificial eye’s image sharpening and overall clarity were better than those produced by the Mark I Eyeball.
Note: Do not read that as a comment on the nature of frame rates and whether humans can see above a particular framerate threshold. Measured response and recovery times on the human eye range from 40ms to 150ms. Average total human reaction time is between 200ms and 250ms. Exceptional individuals sometimes beat these averages; 150ms reaction times are not unknown.
In short, this artificial retina sees better than we do in multiple respects, and as far as I’m aware, this is the first time anything like it has been built. The new retina even lacks a blind spot.
The Long Road Ahead
As Scientific American details, there’s a lot of work to do before a system like this could be integrated into a functional device. Systems like Second Sight (a company we’ve covered before) integrate directly with the brain. This artificial retina doesn’t, which is why I haven’t referred to it as a bionic eye. It is a proof-of-concept artificial retina that might one day be deployed in a bionic eye, provided current problems can be overcome.
Overcoming those problems is going to be difficult. The human visual system is not a camera, even if it can be conceptually described in similar terms. The idea that we’d benefit from the features the sensor offers implicitly assumes we can connect it to the brain seamlessly enough to allow these benefits to manifest. Because there are different forms of blindness, solutions that work for one type may not work for another. Blindness caused by brain damage would be unlikely to be helped by this kind of solution — even a flawless artificial eye won’t let us restore sight to every single person.
Still, the long-term potential here is tremendous. It’s been less than a decade since the first grayscale, low-resolution artificial sensors came to market. Now we’re trying to figure out how to build a plausibly superior system and connect it to the server backend, if you’ll pardon the metaphor. Hopefully we’ll see further advances in the field over the next decade.
Carriers around the world are just starting to roll out 5G service, but AT&T jumped the gun a bit with its “5G Evolution” branding for LTE. Other carriers, consumer watchdogs, and obsessive technology journalists have been pointing out how misleading it can be, and now the National Advertising Review Board (NARB) agrees. AT&T will tone down its 5GE branding, but it’s not completely giving it up.
AT&T debuted its 5GE branding toward the end of 2018, with “5GE” icons attached to the signal bars on smartphones. The carrier also talked about 5G Evolution in its marketing materials. AT&T’s angle was that its 5GE networks would be a stepping stone to true 5G because they include some of the same technologies, like 256 QAM, 3-way carrier aggregation, and 4X4 MIMO. However, other carriers had already deployed those technologies on LTE, and they didn’t start calling their networks 5G-anything.
By some measures, AT&T’s 4G network got substantially faster after the 5GE upgrade. AT&T’s LTE spectrum is more split up than the other major carriers, so newer network technologies like carrier aggregation helped devices make better use of the bands. But 5GE was never real 5G even though plenty of AT&T’s customers thought it was. Now that real 5G is starting to roll out, AT&T’s 5GE branding is perhaps even more confusing for consumers, and the NARB has finally told AT&T to knock it off.
The NARB is not an independent body — it’s an advertising industry group that aims to self-police its members. So, you can imagine how misleading AT&T’s marketing would have to be for the NARB to scold the carrier publicly. The board found that AT&T’s addition of “Evolution” to 5G was not sufficiently clear to differentiate it from true 5G service. AT&T “respectfully disagrees” with the decision but says it will comply. Well, sort of.
AT&T has agreed to stop advertising 5GE, but that’s an easy decision to make. AT&T has real 5G service in numerous markets, so it doesn’t need to brag about its fake 5G anymore. Importantly, AT&T will not stop displaying 5GE icons on smartphones. This is arguably the most misleading part of the carrier’s 5GE branding — even people who don’t pay attention to the marketing can be tricked into thinking AT&T is offering 5G in their area. As usual with carriers, it’s two steps forward and one step back.
Mortal Kombat 11 players will soon be able to play out one of the most-discussed cinematic battles of the 1980s that never actually happened. RoboCop and Terminator are both coming to the franchise, finally allowing gamers to answer for themselves what we used to argue over during lunch.
Now, personally, I have to say — I’ve always been 100 percent on Team Terminator on this issue. In fact, I don’t see how the pro-RoboCop faction even has a leg to stand on.
Let’s examine the facts. Alexander James Murphy is a cop who gets most of his brain shot out by a crime boss before his amazingly corrupt employer literally shoves a few of his organs and parts of his cerebellum and cerebrum into a titanium can.
The T-800, in contrast, is a ruthless, entirely mechanical adversary. It’s much faster than RoboCop. RoboCop is, to be sure, extremely durable — but the T-800 has survived direct hits from grenades, incendiaries, high-speed vehicles, and a truly astonishing number of bullets. It’s simply absurd to argue that…
Oh. Right. There’s actually a story attached to this. First up, here’s a video that ends with RoboCop delivering one of his fatalities:
There’s also a second video available, this one with a demonstration of the T-800 laying out the pain. Both are full of easter eggs, including references to the RoboCop Versus The Terminator comic, in which RoboCop discovered Skynet had been built in part from his own technology, and plotted for decades to destroy it. The Terminator’s signature step-out recalls an iconic scene from T2, while changing it to fit the Mortal Kombat universe.
Mortal Kombat, MKII, and MK3 were staples of my adolescence and young adulthood, but I haven’t kept in touch with the series much through the intervening games. Visually, it’s impressive, though I’d rather some of the damage inflicted in the periodic close-ups actually remain on the model once we transition back to the fight. Mortal Kombat 11 clearly uses two different rendering approaches throughout the matches, and while the game transitions cleanly and quickly between them, it’s still a little visually jarring. We suddenly move from a battlefield perspective to what looks like an entirely different area, and the lighting model and detail level shift dramatically to accommodate Fatalities or various high-damage attacks. I’m genuinely curious to see what the next console generation brings to the table for fighting games like this — the advent of substantially faster CPUs and ray tracing should allow for some incredible animation and art.
I don’t have the game, but I certainly wouldn’t mind a match or three — if only to put RoboCop in its place, once and for all.
Rivet Networks, the company behind Killer Networking products, has been acquired by Intel for an undisclosed sum. Rivet Networks began life as Bigfoot Networks, with a dramatic “Killer NIC” card that sold for $250 before pivoting to building software solutions to prioritize and classify traffic. The company was acquired by (and spun back off from) Qualcomm, re-emerged as Rivet Networks, and as of yesterday, is now an Intel property.
Since it re-emerged from Qualcomm, Rivet has focused on building relationships with both motherboard and laptop OEMs. The company has shipped its own custom-branded solutions with underlying hardware built by Qualcomm, Realtek, and Intel at various points in time. It’s also offered features you don’t generally find elsewhere, like the option to use wired Ethernet and Wi-Fi simultaneously, or to route traffic through specific network interfaces.
Over time, Rivet has been picking up more network partners and shipping hardware on a wider range of motherboards, including a partnership with Dell on the XPS product family. Overall, the company’s profile has been rising since the spinoff, and the acquisition today is the logical outgrowth of that trend.
Intel and Rivet Networks partnered to build the Killer 1650X.
So what does Intel plan to do with this acquisition? That’s less clear. The blog post announcing the deal refers to the broad surge in networking traffic that’s happened over the past few months — a subtle nod to the ongoing impact of COVID-19, without actually naming the pandemic. There are no specific references to any projects between the two companies, however, beyond a statement that Intel will continue to license Rivet Networks software to customers. Rivet worked closely with Intel to develop its solutions around the AX201 and Killer AC-1535, so we should likely expect further developments around these products and, presumably, some additional goodies in the future.
As for what this means for the future of PC networking? That really depends on which aspects of the business Intel chooses to emphasize. The recent pandemic has at least temporarily turbocharged the work-from-home community, driving new hardware purchases and efforts to outfit home offices for long-term use. As such, Intel might want Killer for its traffic shaping and prioritizing tech in a business context. Alternately, it could plan to continue to develop the software across the entire spectrum of business and consumer uses.
Ultimately, we read this as a move by Intel to boost its networking credentials at a time when home Wi-Fi performance is likely to be more top-of-mind for the average consumer than it might be otherwise. Intel has often marketed its own Wi-Fi solutions as specific reasons to buy an Intel laptop, going back at least to the Centrino platform. From that perspective, the company’s decision to buy an enthusiast-oriented network developer makes perfect sense.
This compact mini-PC was designed by Intel with a Core i3-8121U processor that has two CPU cores. Though it’s best used as a small-form-factor work PC, the system also has a Radeon 540 graphics processor, which can run some games with low settings. Right now you can get one from Amazon marked down from $584.00 to $360.00.
AMD’s Ryzen 9 3900X processor is one of the fastest CPUs on the market today, with a dozen cores that boost as high as 4.6GHz and 70MB of cache. For a limited time you can get this blazing-fast processor marked down from $499.00 to $409.99 from Amazon.
Working on a 4K monitor has some major advantages including being able to fit more on-screen at any given time. This display from Dell utilizes a 27-inch 4K panel that also supports 1.07 billion colors, making it well-suited for image editing. Right now you can get one from Dell marked down from $719.99 to $579.99.
This external hard drive gives you an enormous amount of storage space at an affordable price. The drive utilizes USB 3.0 to enable a 5Gbps data connection, and it also comes with a two-year warranty. Today you can get one from Amazon marked down from $249.99 to $199.99.
This compact desktop is all business with a relatively small physical footprint that’s in no way indicative of its performance. At the heart of this PC is an Intel Core i7-9700 processor with eight CPU cores that can turbo boost up to 4.7GHz, which gives it exceptional performance for running numerous applications at the same time. Currently, you can get this system from Dell marked down from $1,141.43 to just $689.00.
Eufy designed this slim robovac with a vacuum capable of 1,300Pa of suction. This gives the vacuum the power it needs to help keep your home clean, and it can also last for up to 100 minutes on a single charge. Right now you can get one from Amazon marked down from $229.99 to just $159.99.
Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com.
For long-time cable users, cutting the cord can be a scary thought. But that’s only because you have yet to experience the service available from companies like Sling TV, which can function as a full replacement for traditional cable. Well, now’s your chance. Sling has made a lot of its content, including over 5,000 TV shows and movies, entirely free. You don’t have to register or enter a credit card. Just start watching.
Sling TV is a unique streaming service that aims to provide you with a large library of content and access to live TV channels all on one platform. With the free service offered in this deal, you don’t gain access to everything that Sling has to offer, but you will be able to enjoy thousands of videos for absolutely no charge. You won’t even be pressured into registering an account or putting in credit card information.
The free service predominantly includes TV shows and movies that have already aired. Most of the live TV channels won’t be available to you with the free service, but you will be able to watch content on some channels such as ABC and ESTV. A limited amount of content may also be available for other channels, but this may vary over time. Nonetheless, there is still a fair bit of content available through this service, and it gives you a chance to try out Sling before buying into one of its color-coded service packages.
The company’s main service is organized into two color-coded packages known as Sling Orange and Sling Blue. Many popular channels including TNT, A&E, AMC, CNN, Comedy Central, History Channel, Lifetime and Cartoon Network are available on both packages. This gives you a solid content lineup for the whole family to enjoy, with a few extra channels available depending on which service pack you select.
Sling Blue provides you with access to the most channels and includes major networks such as MSNBC, National Geographic, Syfy, Nick Jr., NFL Network, USA, and the Discovery Channel. Sling Orange adds just six additional channels, but if you are a major sports fan it is likely the best package for you. The Sling Orange pack has ESPN, ESPN2, and ESPN3, as well as access to the Disney Channel, Freeform, and MotorTrend.
Sling’s regular service costs $30 per month, but if you enjoy the free service then you are under no obligation to sign up for the company’s paid content. Instead, you are perfectly free to enjoy the free tier indefinitely. Stop overpaying for cable and give Sling a try today!
A Cisco CCNA (Cisco Certified Network Associate) or CCNP (Cisco Certified Network Professional) certification means you understand how Cisco systems and hardware work together. Considering Cisco products run more than half of the Ethernet switching operations around the world, a full command of the Cisco network product line is one of the most valuable certifications you can hold in the IT world.
Altogether, this bundle includes almost 100 hours of training digging into every facet of building, managing, securing, and growing a digital system using Cisco hardware and software.
After Cisco consolidated several of its certification exams into the larger, all-inclusive CCNA 200-301 exam earlier this year, two courses here are focused directly on helping students pass this all-important test.
After all the training in Cisco CCNA 200-301 Bootcamp and New Cisco CCNA (200-301) Volume 1: The Complete Course, you’ll understand all about networking, routers, switches, and how to configure all of it. Training under real-world conditions in hands-on labs and other exercises, you’ll develop the skills needed to work on enterprise production networks.
The next logical step for those looking to advance their networking career within Cisco is the Cisco CCNP T-Shoot (300-135) test — and one of the courses included will help you acquire a better understanding of the CCNP T-Shoot requirements. Meanwhile, the older CCNA R/S (200-125) exam centered on configuring default, static, and dynamic routing, so you’ll get schooled in those disciplines as well.
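For a taste of what that routing coursework covers, static and default routes in Cisco IOS look like this (a generic, hypothetical snippet for illustration, not material from the bundle):

```
! Hypothetical IOS example: send traffic for 10.0.2.0/24 via the next hop
! 192.168.1.2, and fall back to a default route for everything else.
ip route 10.0.2.0 255.255.255.0 192.168.1.2
ip route 0.0.0.0 0.0.0.0 192.168.1.1
```

Dynamic routing protocols such as EIGRP, also covered in the bundle, replace hand-maintained entries like these with routes the routers learn from each other.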
Finally, students will also explore training to understand the hybrid-distance-vector routing protocol EIGRP; the fundamentals of Multiprotocol Label Switching (MPLS); and the advanced routing, switching, troubleshooting, and security that will help you in preparing for the CCNP Enterprise certification exam.
Regularly a $199 value, this bundle of training for these critical Cisco exams now costs just $34.93, a savings of more than 80 percent.
Microsoft and OpenAI announced a partnership last year to develop new artificial intelligence technologies, and Microsoft just revealed the first product of this deal: a massively powerful supercomputer. The system is one of the top five most powerful computers in the world, and it’s exclusively for training AI models. The companies hope this supercomputer will be able to create more human-like intelligences. We just hope those intelligences will not exterminate humanity.
Microsoft didn’t say where exactly its new Azure-hosted supercomputer ranks on the TOP500 list, just that it’s in the top five. Based on the last list update in November 2019, that means Microsoft’s system is capable of at least 38,745 teraflops, the peak speed of the fifth-ranked Frontera supercomputer at the University of Texas. It could be as fast as 100,000 teraflops without moving up the list, though — there’s a big gap between numbers four and five.
While we don’t have measurements of its raw computing power, Microsoft was happy to talk about all the hardware inside its new supercomputer. There are 285,000 CPU cores, which sounds like a lot. However, that’s less than any of the supercomputers currently in the top five. Microsoft’s AI computer also sports 10,000 GPUs and 400 gigabits per second of network bandwidth for each GPU server.
You might know OpenAI from its work on GPT-2, the fake news bot that the company initially deemed too dangerous to release. OpenAI used a technique called self-supervised learning to create GPT-2, and it will be able to do more of that with the new supercomputer. In self-supervised learning (sometimes called unsupervised learning), computers build models by assimilating large amounts of unlabeled data, and humans can then make adjustments to the model. This has the potential to create much more nuanced and effective AI, but it takes a lot of processing power.
We can only guess at what OpenAI will be able to develop with one of the world’s fastest supercomputers at its disposal. Microsoft and OpenAI believe that a powerful computer with reinforcement learning techniques can learn to do anything a human can do — it’s just a matter of time and scale. In a human brain, there are trillions of synapses carrying electrical impulses that create conscious thought. In AI, the equivalent is a parameter. The latest OpenAI model has about 17 billion parameters, and the companies think parameters will reach into the trillions very soon.
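To make “parameter” concrete: in a neural network, the parameters are the learned weights and biases. This toy Python count for a small fully-connected network shows how quickly they accumulate (the layer sizes here are arbitrary illustration values, not any real model’s):

```python
# Count the learnable parameters in a tiny fully-connected network.
# Layer sizes are arbitrary illustration values, not a real model's.
def dense_params(n_in, n_out):
    # One weight per input-output pair, plus one bias per output.
    return n_in * n_out + n_out

layers = [(784, 512), (512, 256), (256, 10)]
total = sum(dense_params(n_in, n_out) for n_in, n_out in layers)
print(total)  # 535818
```

Even this three-layer toy has over half a million parameters; a 17-billion-parameter model is tens of thousands of times larger, which is why training it takes supercomputer-class hardware.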
It has been nearly 20 years since a visibly sweaty Steve Ballmer took the stage at a Microsoft conference to scream his support for “developers, developers, developers!” Microsoft still knows the value of a vibrant developer community, but it’s a little less direct in its praise. Current CEO Satya Nadella kicked off the company’s first-ever online-only Build conference, and there was a neat little Easter egg for developers in the background.
The annual Build conference is aimed at developers using Microsoft technologies like Windows and Azure. It’s the perfect time for the company to show how much it cares about its development community, but the days of executives running across the stage and chanting as they pump their fists are over. It only worked in 2000 because that whole display was very on-brand for Ballmer — Satya Nadella is a more reserved guy who shows his admiration with cleverly coded signs.
Nadella gave a speech on the Build 2020 video stream, standing in front of a shelf of knickknacks. Amongst the statues, photos, and whatnot was a small sign reading “RGV2cw.” It seemed like nonsense, but why would someone make the sign unless it had some meaning? It didn’t take long for Microsoft’s army of developers to figure it out. The sign reads “Devs” in base 64.
Base 64 encodes binary data as text, so it can travel over platforms and protocols that only support text content. It’s particularly common on the web, where it can embed image files and other binary data in text containers like HTML and CSS. And of course, you can encode any bit of plain text you want in base 64, too.
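For the curious, decoding the sign is a one-liner in Python using only the standard library (note that the decoder expects the “=” padding that the physical sign omits):

```python
import base64

# The sign on Nadella's shelf, with the '=' padding the physical sign omits.
# Base64 maps every 3 bytes of input to 4 characters drawn from A-Z, a-z,
# 0-9, '+' and '/', which is how binary data survives text-only channels.
sign = "RGV2cw=="

print(base64.b64decode(sign).decode("ascii"))  # Devs
```

Running the encoder in reverse on `b"Devs"` produces `RGV2cw==`, confirming the decoding.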
It’s not an effective way to relay a message, but it’s one that’s sure to catch the attention of developers. We’d still prefer a perspiring executive running across the stage, yelling at the top of his lungs, but we can understand why Nadella might not want to do that.
Do you think Microsoft has regained its geek cred? Let us know in the comments.
Last week, we covered the news that AMD’s upcoming Zen 3 CPUs (presumably to debut under the Ryzen 4000 brand) would be incompatible with the company’s previous X470 and B450 motherboards. AMD has since decided to reverse course, citing end-user unhappiness with the decision as a major reason for its course correction.
The original reason for AMD’s decision not to update the UEFI on certain platforms was that some early AM4 CPUs can only support a 128Mb UEFI ROM. Most AMD motherboards therefore ship with this size of BIOS chip — and there’s a limit to the number of CPUs AMD can support in that amount of space, given that a certain amount of room is typically allocated for the graphical interfaces that modern motherboards use. For that reason, AMD was going to shift Zen 3 / Ryzen 4000 support to the B550 platform and beyond.
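To get a feel for the constraint, here’s a back-of-the-envelope sketch in Python. Only the 128Mb chip size comes from AMD; every other figure is an assumed, hypothetical value for illustration:

```python
# Rough UEFI ROM budget for an AM4 board. All figures except the 128Mb
# flash size are hypothetical values chosen purely for illustration.
ROM_BYTES = 128 * 1024 * 1024 // 8   # 128 megabits of SPI flash = 16 MiB

CORE_FIRMWARE = 6 * 1024 * 1024      # assumed: base UEFI code and drivers
GRAPHICAL_UI = 4 * 1024 * 1024       # assumed: images/fonts for a GUI setup
TEXT_UI = 512 * 1024                 # assumed: a bare text-mode setup UI
PER_CPU_FAMILY = 1024 * 1024         # assumed: AGESA/microcode per CPU family

def families_supported(ui_bytes):
    # Whatever space the UI doesn't consume is left for CPU support tables.
    return (ROM_BYTES - CORE_FIRMWARE - ui_bytes) // PER_CPU_FAMILY

print(families_supported(GRAPHICAL_UI))  # 6
print(families_supported(TEXT_UI))       # 9
```

Whatever the real numbers are, the trade-off works the same way: a richer setup UI or more CPU families supported, but not both in a fixed 16MiB ROM.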
Now, however, the company has pledged to bring Zen 3 support to 400-series chipsets. AMD clearly isn’t sure exactly how it’s going to pull this off, since the company literally says it’ll “work out a way” to make this happen. Reducing the number of CPUs that a motherboard supports could still work — but which CPUs do you remove to add features? How much onboard memory can you get back if OEMs drop to a basic text UI?
400-series motherboards might deploy a UEFI fork, where older CPUs would remain the default and the relative handful of users with newer chips would need to flash over to a beta UEFI variant. As always, support will depend on motherboard manufacturers being willing to add it in the first place. Since this strategy would require more complexity than simply issuing a UEFI update, some motherboard manufacturers may be more willing to adopt it than others. An updated version of AMD’s B550 chipset support diagram might look a bit like this:
Updated unofficial chipset support diagram for Zen 3.
In short, this is an announcement where the details are still being very much worked out. AMD has listed seven points in its PR statement on how the upgrade is supposed to work. The program is rather different than any upgrade service we’ve heard about, so you’d best read through them if you want to take advantage of it:
1). AMD will develop and share code allowing motherboard manufacturers to support Zen 3 on select B450 / X470 motherboards using select beta BIOSes.
2). Using one of these beta BIOSes will probably remove support for previous CPUs.
3). The upgrade path will be one-way. BIOS flashback will not be supported.
4). BIOSes will only be made available to customers who have verified a Zen 3 CPU purchase, to minimize the chances that any motherboard is flashed for the wrong chip.
5). Beta BIOSes for various boards may not be available at the Zen 3 launch.
6). This will be the last upgrade for 400-series boards. Next-generation CPUs will require a B550 motherboard or later.
7). AMD continues to recommend a 500-series motherboard with a Zen 3 CPU for an optimal experience.
Features like PCIe 4.0, for example, are highly unlikely to be supported on X470 / B450 motherboards, even with Zen 3 CPUs.
Overall, this is an impressive example of a company changing course in response to enthusiast feedback and going out of its way to enable support for a relatively small group of people. X370 motherboards, however, will be limited to Zen 2 / Ryzen 3000.
I genuinely thought AMD’s overall upgrade record was defensible before the company made this announcement, so extending additional support to X470 is icing on the cake. At this point, both X370 and X470 will have been supported through two full architectural upgrade cycles — X370 with Zen+ and Zen 2, X470 with Zen 2 and Zen 3.
AMD also reiterated that Zen 3 will be available in 2020, though no information on launch timing, pricing, or core counts is available. I suspect AMD will emphasize improving performance per core rather than core counts this year, in much the same way that Zen+ built on Zen’s IPC and power consumption rather than dramatically tweaking core counts. The market needs time to digest the core count doubling we’ve seen in the past year and I think that’ll happen throughout 2020 and into 2021.
Today, after no small amount of speculation as to its overall performance and power consumption, the Intel Core i9-10900K and associated rest of the 10th Generation desktop family are up for review. It’s a significant moment for Intel, given the dominant position AMD has seized in the desktop market as a whole.
In the three years since AMD launched its first-generation Ryzen CPUs, AMD and Intel have established a response pattern with each other. When AMD took the lead with the Ryzen 1800X, Intel responded with the Core i7-8700K — a six-core CPU with performance strong enough to take the overall performance crown back from the eight-core Ryzen 7 1800X.
Then, in 2018, we saw the 2700X take back the overall performance crown from the 8700K. “Not a problem,” said Intel, unleashing the Core i9-9900K, an eight-core CPU at a substantially higher price, but with some significant performance chops of its own. Then, last summer, AMD launched the new Ryzen 3000 family of CPUs on 7nm… and Intel held its fire. While the two companies tangled in the HEDT segment last year, with Intel slashing prices and AMD launching new 32-core CPUs, things on the ordinary consumer desktop front have been relatively quiescent.
Well, that quiet ends now. This is where the new CPUs come in, at least in theory.
For most of the last year, AMD has had a lead on Intel in terms of power consumption (though this varies somewhat based on chipset), total number of CPU cores, performance per dollar, and, in many workloads, absolute performance. Intel’s long pause on 14nm has made it progressively harder for the company to compete against AMD’s advancing microarchitecture and process node transitions.
Gaming is one of the last major category wins under Intel’s belt, though the company has maintained a strong position in creative applications like Adobe Creative Cloud as well. AMD and Intel have been generally tied at 4K since 2017, provided that you use settings that actually tax a GPU, but at 1080p Intel has maintained a modest advantage. AMD’s 7nm Ryzen cut into Intel’s 1080p performance leadership, and the 10900K’s high clock speed (5.3GHz) is an effort to regain some of that leadership.
A photo released on April 30, 2020, shows a die from a 10th Gen Intel Core processor. (Source: Intel Corporation)
The question for Intel, however, is whether or not the 10900K can still squeeze meaningful performance improvements out of its 14nm node. Back in 2017, Intel managed to defeat AMD’s 1800X with a CPU packing two fewer cores, but the situation has changed since then. The 3900X is going to be the major challenge for Intel’s Core i9-10900K, and while the 10900K will have the higher clock speed, it lacks the 3900X’s additional cores and threads.
Generally speaking, we’d expect the Ryzen CPUs to dominate in rendering and multi-threaded application tests, but Intel to continue to lead in terms of raw gaming performance. At the same time, it’s clear that physics will not allow Intel to continue to ramp clock speeds in this fashion. The company has taken to shaving 300 microns of material off the top of its CPUs in order to improve their thermal transfer characteristics. When Intel is lapping its own die to improve thermal transfer, the company is bumping up against the fundamental limits of its own manufacturing capabilities.
New Generation, New Platform
With the launch of 10th Gen on desktop comes the inevitable need to migrate to a different CPU socket. This time around, Intel and various OEMs are straight-up promising that Z490 boards will be upgradeable to future Intel chips with support for features like PCIe 4.0. If you believe the rumors, this is Rocket Lake — Intel’s next-generation CPU microarchitecture with backported features intended for 10nm before that process node got stuck.
Thus, you’ll see a lot of Z490 motherboards advertising features like PCIe 4.0 support this generation. That doesn’t mean that Intel is supporting PCIe 4.0 now — just that board vendors are already advertising capabilities you can’t even enable yet.
I can believe that Intel needed a new CPU socket for Comet Lake / Rocket Lake, if only because I genuinely don’t think the company ever remotely expected to pack 10 cores into its desktop socket on 14nm. At the same time, AMD has been offering the better overall upgrade path.
The majority of X370 motherboards and every X470 motherboard are capable of stepping up from an eight-core Ryzen 1xxx or 2xxx CPU to a Ryzen 3000. AMD has just announced that it will support Zen 3 on X470 and B450 motherboards, though the path to unlocking that support will require some effort and understanding of the process to traverse. AMD’s AM4 support has not been perfect — not every X370 or B350 motherboard got upgraded to support Ryzen 3000 — but it’s been stronger than what Intel has offered. This has been a historic strength of AMD’s platforms as a whole, but it faded during the Bulldozer era when there wasn’t really anything to upgrade to. With Ryzen now in-market for several years, this advantage has emerged again.
Due to Circumstances Beyond Our Control…
My own plans to present a full set of power consumption data between Ryzen and Intel have been foiled by the untimely death of a 1250W PSU I’d been using to standardize all of my power consumption tests.
Topping that off, all of our game benchmarks are unaccountably slow. Our testing consistently puts the Core i9-10900K behind the Core i9-9900KS, 9900K, and 9700K. According to Intel and other reviewers we reached out to, these results are atypical and unexpected. A brand-new UEFI from Gigabyte for our Z490 Master motherboard did not solve the problem.
As you read this, I’ll be busily engaged in one of two endeavors: retesting a fresh OS install on this motherboard, or testing a fresh OS install on an Asus motherboard. Either way, I’ll have a full, updated suite of game benchmarks available as soon as possible.
Power consumption tests… I admit, I have to figure out what I’m doing about those. I don’t have comparative data on any of my currently-alive test PSUs (I’m using my second backup PSU, a 750W Antec).
All of our benchmarks were run on a Gigabyte Aorus Z490 Master motherboard with 32GB of Crucial DDR4-3600 RAM and a 1TB Samsung 970 EVO. Windows 10 1909 was installed with all patches and updates, alongside Nvidia driver 445.87.
Non-Gaming Benchmark Results
Our non-gaming benchmark results are presented below. Gaming tests TBA.
In the Blender 1.0 Beta 2 benchmark, the 10900K establishes what will quickly become a pattern. While it offers a solid performance improvement over the Core i9-9900K, 10 cores of Skylake-era 14nm aren’t enough to match 12 7nm Ryzen CPU cores. Officially, the 3900X is a $500 CPU, but Amazon currently has it for $409.
I’ve combined the Cinebench results because they point in more or less the same direction. The Core i9-10900K ekes out roughly 5 percent more single-thread performance and improves substantially on the Core i9-9900K’s multi-threaded scores. The gains here are coming from more aggressive clocking as well as the 1.25x improvement in core count between the two chips.
But while the Core i9-10900K’s performance gains are solid, they don’t match the Ryzen 9 3900X’s overall performance. In both cases, AMD holds the lead.
Handbrake 1.2.2 is a mixed bag for the Core i9-10900K. On the plus side, its performance in H.264 when performing a 1080p30 Fast encode of the Tears of Steel 4K film is excellent, beating the Ryzen 9 3900X. H.265 performance, however, is slower than anticipated.
This H.265 result was odd enough that I actually switched to Handbrake 1.3.2 and ran the same encode test again. In this case, the Core i9-10900K took 6.43 minutes to encode the H.264 sample — significantly slower than in 1.2.2 — but 6.3 minutes in H.265.
Overall, the performance improvement in H.264 with 1.2.2 is better than the performance gain in H.265 with 1.3.2, but I’ll likely re-run this test along with the gaming benchmarks in the morning. It feels as though the Z490 motherboard platform could have used a little more time to bake.
Corona Render is an Intel-friendly application, and the Core i9-10900K’s performance reflects this, with the 10-core CPU coming in just five seconds behind the 12-core 3900X. It’s one of the strongest showings for the 10900K, but it isn’t a win.
Our MSVC 2019 Qt compile test hands the Core i9-10900K our second win (if you’re feeling generous) of the day against the Ryzen 9 3900X:
While the Ryzen 9 3950X retains the overall performance lead, the Core i9-10900K’s 10 cores win the day over the Ryzen 9 3900X — by the barest whisker.
To be added — but I’ve got no problem saying what I expect. I expect to see the Core i9-10900K beat its predecessors by a few frames per second at 1080p, but to match them at 4K, where game performance becomes GPU-bound. We test with an RTX 2080 instead of an RTX 2080 Ti, so our numbers are a bit more compressed than you might see with that card, but not to a degree that would make a difference (and an RTX 2080-equivalent GPU is not an unrealistic match for the Core i9-10900K).
Gaming is the highest-profile consumer category where Intel continues to command a performance lead, and it’s where the company has focused its CPU efforts. At the same time, the gap between Intel and AMD, even at 1080p, is modest at best. Gamers searching for the absolute highest frame rates will likely still play slightly faster on Intel systems, but the difference between the two is unlikely to be noticeable, even in competitive play.
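The expectation above follows from a simple model: each frame is gated by whichever of the CPU or GPU takes longer to finish its share of the work. A toy Python illustration (the millisecond timings are invented for the example, not measurements from our testing):

```python
# Toy CPU-bound vs GPU-bound frame-rate model; per-frame timings are
# invented numbers, not benchmark results.
def fps(cpu_ms, gpu_ms):
    # The slower of the two stages sets the frame time.
    return 1000.0 / max(cpu_ms, gpu_ms)

# At 1080p the GPU finishes quickly, so a faster CPU raises the frame rate.
print(round(fps(cpu_ms=6.0, gpu_ms=4.0)))   # 167 (CPU-bound)
print(round(fps(cpu_ms=5.5, gpu_ms=4.0)))   # 182 -- the CPU gain shows

# At 4K the GPU dominates, and the same CPU improvement changes nothing.
print(round(fps(cpu_ms=6.0, gpu_ms=16.0)))  # 62 (GPU-bound)
print(round(fps(cpu_ms=5.5, gpu_ms=16.0)))  # 62
```

This is why CPU reviews lean on 1080p numbers: at 4K with demanding settings, nearly any modern CPU produces the same frame rate.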
Preliminary Conclusion: Skylake’s Swan Song
The Core i9-10900K is a step forward for Intel. At $488, it’s a considerably better buy than the Core i9-9900K, which was itself an excellent CPU. Its single-threaded performance is excellent and it’s capable of punching above its weight class on occasion. Skylake was an excellent CPU architecture in 2015 and it remains an excellent architecture in 2020.
And yet, for all these points — and for the first time, arguably, since Ryzen launched — Intel cannot claim to have reclaimed the overall pole position the way it could with the Core i7-8700K or Core i9-9900K when those parts debuted. I expect the Core i9-10900K to retain leadership in areas where Intel has been leading and to compete more effectively with the 3900X than its predecessor, but as far as matching or leading AMD’s 12-core CPU? On the whole, it doesn’t. And while neither Intel nor AMD has made promises about future motherboard support beyond the parts they plan to launch next, if you had to bet on which company would offer support for a wider range of CPUs over a longer period of time, you’d bet on AMD.
The bottom line is this: The Core i9-10900K is a powerful, fast CPU, and an illustration of how little gas Skylake and Intel’s 14nm collectively have left in the tank. Rocket Lake, when and if it arrives, will supposedly give us new architectural improvements that may breathe some new life into the node, but the 10900K illustrates that Skylake has taken Intel as far as it can.
Comet Lake may paint a target on AMD’s Matisse, but it doesn’t topple its rival — and while it certainly improves Intel’s overall position, it doesn’t do so to the same degree that the 9900K and 8700K did when they arrived relative to their smaller rivals.
The Switch is the biggest hardware success for Nintendo in years, but the money it makes on the console is nothing compared with game sales. Naturally, Nintendo is quick to deploy its army of lawyers when someone threatens to undermine those sales, for example by running a ROM repository. Hacking collective Team-Xecuter is set to release a tool that can unlock the Switch to play homebrew and pirated games. Nintendo would very much like to stop that from happening, so it has filed a lawsuit.
You might know Team-Xecuter from its last major project, a USB dongle that can install the custom SX OS on Switch units from June 2018 and earlier. Those consoles have an older version of the Nvidia Tegra SoC with an exploitable flaw that Team-Xecuter and others have used to mod the software. Newer consoles have a patched chip that blocks such mods, but Team-Xecuter says its upcoming SX Core (standard Switch) and SX Lite (Switch Lite) kits will be able to get SX OS on even the upgraded models.
Unlike the USB dongle, the new kits require opening the console to solder in a small daughter board with its own SD card slot. Team-Xecuter has demoed the system a few times, and early testing and review units have already shipped out. The devices, which cost about $50, could go on sale in the coming weeks. Once installed, SX OS allows Switch owners to back up their content, play homebrew games, and yes, run pirated games.
Nintendo’s lawyers claim allowing the SX Core and SX Lite to go on sale would cause “astounding” damage to Nintendo’s business. The lawsuit points out the SX Core and SX Lite will make 35 million more Switch consoles hackable, which is in addition to the 20 million affected by the Tegra exploit. The community is divided on Team-Xecuter — while there are fans anxious to get the mod kits, others are uncomfortable with Team-Xecuter’s focus on piracy. Most teams creating mod tools for the Switch do so for the express purpose of backing up content and making homebrew games.
Pre-orders of the SX Core and SX Lite have been live at various retailers for weeks, but Nintendo seeks an injunction that will block any further sales. Nintendo also demands $2,500 in damages per sale, plus the seizure and destruction of all Team-Xecuter kits. No matter what happens, the genie is out of the bottle. Even if Nintendo stops Team-Xecuter, someone else will just clone the technology.
You’ve been stuck in the house for the better part of two months. If you’re a gamer, the odds are pretty strong that with all that time on your hands, you’ve already effectively smashed every game in your current library.
It’s well past time for some new challenges. And we’d suggest that with a wealth of gaming history to feed that appetite, you can satisfy that urge for a ridiculously low price. So we’ve assembled eight games, some new, some classic, yet all with some big discounts that should present some much-needed new challenges.
Star Wars…ever heard of it? On top of being part of one of the most popular brands in the world, these two games — Knights of the Old Republic and Knights of the Old Republic II — are two of the most honored games of the century so far. The industry showered awards on these prequels, set 4,000 years before Luke, Leia, and Han, in which a lone surviving Jedi seeks to restore order to a galaxy beset by evil Sith Lords. Best of all, you get both for under $5.
If you’ve been gaming in the past decade, you know the madcap action and zany characters of Borderlands. The Handsome Collection brings together two of the franchise’s biggest hits, Borderlands 2 and Borderlands: The Pre-Sequel, which let players hunt for treasure and other rare antiquities across the wild world of Pandora, happily blasting away as they go. Each game is a $30 value, but now you can get both for under $10.
If strategy and world-building turn your gears more than mayhem and adrenaline, look no further than the granddaddy of classic decision-making gaming, Sid Meier’s Civilization. A 30-year gaming staple, these packages include the latest installments in the long-running series that tasks you with safely guiding a world from prehistoric times through centuries of societal and technological advancements.
You can save $10 off the price of the 2019 Sid Meier’s Civilization VI: Gathering Storm standalone expansion; or get the best value by grabbing the entire Sid Meier’s Civilization VI: Platinum Edition epic, which not only includes Gathering Storm, but also the full Civilization VI, six DLC packs, and the Rise and Fall expansion module. With Sid Meier’s Civilization V, you can go back even further and get everything released for the series’ fifth installment, all for just $12.50.
This game is a trip as you explore the beautiful world of the Inverse, a place where civilizations, oceans and the basic laws of physics aren’t quite like our world. As visually impressive as it is engaging, you fly through this incredible world, trying to piece together its rich, action-packed history…and now, it’s 75 percent off.
In Lightmatter, you need to stay in the light — because the shadows themselves are deadly. Players have to be pretty ingenious to keep coming up with ways to generate light and escape the dark as they make their way through a dilapidated building to the center of the mystery of the Lightmatter power source.
The future of 2084 isn’t a great place in Observer. As war and decay rage around you, you play an Observer, a neurally-enhanced police detective who can actually hack the minds of criminals and victims. Of course, there’s more to those visions than meets the eye — and as reality unravels, you’ll start to see why this psychological horror tale was named to Game Informer’s Top 10 Cyberpunk Games of All-Time list. You can also try it for under $10.
Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com.
Last week, Sony announced its IMX500, the first image sensor with an onboard DSP specifically intended for AI processing. Today, it announced the next step in that process — partnering with Microsoft to provide an edge processing model.
The two firms signed a memorandum of understanding (MOU) last week to jointly develop new cloud solutions to support their respective game and content-streaming services, as well as potentially using Azure to host Sony’s offerings. Now, they’ve announced a more specific partnership around the IMX500.
Microsoft will embed Azure AI capabilities into the IMX500, while Sony is responsible for creating a smart camera app “powered by Azure IoT and Cognitive Services.” The overall focus of the project appears to be on enterprise IoT customers, which fits with Microsoft’s overall focus on the business end of the augmented reality market. For example, the IMX500 might be deployed to track inventory on store shelves or detect industrial spills in real time.
The Sony IMX500 (bare chip, left) and IMX501 (packaged model, right).
Sony is claiming that vendors will be able to develop their own AI and computer vision tools using the IMX500 and its associated software, raising the possibility of custom AI models built for specific purposes. Building those tools isn’t easy, even when starting with premade models, and it’s not clear how much additional performance or capability will be unlocked by integrating these capabilities directly into the image sensor. The video below has more details on the IMX500 itself:
In theory, the IMX500 could respond more quickly to simple queries than a standard camera. Sony is arguing that the IMX500 can apply image detection algorithms extremely quickly, at ~3.1ms, compared with hundreds of milliseconds to seconds for its competitors, which rely on sending traffic to cloud servers.
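As a rough sanity check on those numbers, here is a back-of-the-envelope comparison. The ~3.1ms figure is Sony’s claim for the IMX500; the cloud round-trip figure is an illustrative assumption, not a measured value.

```python
# Back-of-the-envelope latency comparison: on-sensor vs. cloud inference.
# ON_SENSOR_MS is Sony's claimed figure; CLOUD_MS is an illustrative
# assumption (cloud round trips often run hundreds of ms to seconds).

ON_SENSOR_MS = 3.1
CLOUD_MS = 300.0

def max_inference_rate(latency_ms: float) -> int:
    """Upper bound on serial inferences per second at a given latency."""
    return int(1000 / latency_ms)

print(max_inference_rate(ON_SENSOR_MS))          # ~322 inferences/sec
print(max_inference_rate(CLOUD_MS))              # ~3 inferences/sec
print(f"speedup: ~{CLOUD_MS / ON_SENSOR_MS:.0f}x")
```

Even if the assumed cloud figure is off by 2-3x in either direction, the gap remains roughly two orders of magnitude, which is the point Sony is making.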
This is not to say that the IMX500 is a particularly complex AI processor. By all accounts, it’s mostly suited to smaller workloads, with relatively limited processing capability. But it’s a first step toward baking these kinds of functions into CV systems to allow for faster response times. In theory, robots might be able to function safely in closer quarters with humans (or perform more complex tasks) if they had better image-processing algorithms that ran closer to the hardware and allowed machines to react more quickly.
It’s also interesting to see the further deepening of the Sony-Microsoft partnership. There’s no doubt that the two companies remain competitors in gaming, but outside of it, they’re getting downright chummy.
I’ve been impressed by AI’s ability to handle upscaling work in a lot of contexts, and self-driving cars continue to advance, but it isn’t clear when this kind of low-level edge processing integration will pay dividends for consumers. Companies that don’t make image sensors may continue to emphasize SoC-level processing techniques using onboard AI hardware engines rather than emphasizing how much of the workload can be shifted to the sensor. Baking AI capabilities into a camera sensor could also increase overall power consumption depending on how the chip functions, so that’ll undoubtedly also be a consideration for future product development.
There are no consumer applications or companies currently announced, but it’s a safe bet we’ll see the technology in ordinary hardware sooner rather than later, whether used for face detection or some type of augmented image processing.
For years, smartphone makers raced to include the most megapixels possible in their cameras, but more pixels don’t guarantee a good camera. After a brief respite, manufacturers are again trying to cram in more pixels. Devices like the Samsung Galaxy S20 Ultra and Motorola Edge+ have a 108MP primary sensor. Samsung’s latest ISOCELL GN1 sensor has fewer pixels, but it might take better photos thanks to improved autofocus and bigger pixels.
Like many aspects of the modern smartphone, camera performance is constrained by the size of the device. These camera sensors are small compared with what you can fit in a “real” camera. Cramming in more megapixels means each pixel on the sensor gets smaller and less sensitive to light. The megapixel race picked up again when manufacturers developed pixel binning technology that can rope together several smaller pixels to act like one larger pixel. So, your phone might have a 108MP sensor, but the photos are only 12MP.
The ISOCELL GN1 has 1.2μm pixels, which is on the large side for smartphones. It also supports Samsung’s Tetracell pixel-binning technology to produce larger effective pixels (2.4μm) and collect more light. Samsung says this doubles the sensor’s light sensitivity and produces 12.5MP photos.
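The binning arithmetic is straightforward. The sketch below uses the resolutions and pixel pitches discussed above; the 3x3 grouping assumed for the 108MP case reflects how sensors like the HM1 reach a 12MP output, and the 0.8μm pitch for that sensor is an assumption for illustration.

```python
# Pixel-binning arithmetic: n x n binning divides resolution by n^2
# and multiplies the pixel pitch by n (light-gathering area by n^2).

def binned(megapixels: float, pitch_um: float, n: int):
    """Return (effective MP, effective pixel pitch in microns) for n x n binning."""
    return megapixels / n**2, pitch_um * n

# Samsung GN1: 50MP at 1.2um with Tetracell 2x2 binning
print(binned(50, 1.2, 2))       # (12.5, 2.4)

# A 108MP sensor with 3x3 binning (0.8um pitch assumed for illustration)
print(binned(108, 0.8, 3)[0])   # 12.0
```

This is why a 50MP sensor with larger pixels can plausibly outperform a 108MP sensor: after binning, both land near 12MP, but the GN1 starts from a bigger photosite.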
Samsung launched the S20 Ultra earlier this year with the 108MP ISOCELL Bright HM1. While this sensor is more technically capable than the GN1, it hasn’t performed as well in practice as we’d hoped. The S20 Ultra was almost universally derided for sluggish autofocus, which relied on phase-detection technology. In phase detection, incoming light is split into two images as it enters the camera; the lens elements adjust until the two images merge, which means the scene is in focus. This is harder to do with larger sensors, though.
The S20 Ultra’s giant camera assembly, courtesy of iFixit.
The ISOCELL GN1 should improve focus performance with the use of Dual Pixel autofocus. Dual Pixel technology places two photodiodes in each pixel, for a total of 100 million phase-detection autofocus (PDAF) points. The sensor analyzes each photodiode’s signal to determine focus, then combines those signals to create a sharp image.
The ISOCELL GN1 will go into mass production later this month, so it could appear in phones over the summer. Some will be from Samsung, but many won’t. While the GN1 sounds like a good alternative to the 108MP HM1, Samsung probably isn’t done trying to fix that sensor. We expect to see the HM1 make a return in the Galaxy Note 20.
If you’ve ever worked for a company, you’re probably aware that they tend to keep computers running after they should’ve been replaced with something newer, faster, and/or less buggy. Fujitsu Tokki Systems Ltd, however, takes that concept further than most. The company still has a fully functional computer it installed back in 1959, the FACOM128B. Even more impressive, it still has an employee on staff whose job is to keep the machine in working order.
The FACOM128B is derived from the FACOM100, described as “Japan’s first practical relay-based automatic computer.” The 100, an intermediate predecessor known as the 128A, and the 128B were classified as electromechanical computers based on the same kind of relays that were typically used in telephone switches. Technologically, the FACOM 128B wasn’t particularly cutting-edge even when constructed; vacuum tube designs were already becoming popular by the mid-1950s. Most of the computers that used electromechanical relays were early efforts, like the Harvard Mark I (built in 1944), or one-off machines rather than commercialized designs.
Relay computers did have advantages, however, even in the mid-to-late 1950s. They were not as fast as vacuum-tube-powered machines, but they were significantly more reliable. Performance also continued to improve in these designs, though finding exact comparison figures for early computers can be difficult. Software, as we understand the term today, barely existed in the 1950s. Not all computers were capable of storing programs, and computers were often custom-built for specific purposes as unique designs, with significant differences in basic parameters.
Wikipedia notes, however, that the Harvard Mark I was capable of “3 additions or subtractions in a second. A multiplication took 6 seconds, a division took 15.3 seconds, and a logarithm or a trigonometric function took over one minute.” The FACOM128B was faster than this, with 5-10 additions or subtractions per second. Division and multiplication were also significantly faster.
The man responsible for maintaining the FACOM128B, Tadao Hamada, believes that the work he does to keep the system running is a vital part of protecting Japan’s computing heritage and making sure future students can see functional examples of where we came from, not just collections of parts in a box. Hamada has pledged to maintain the system forever. A year ago, the FACOM128B was registered as “Essential Historical Materials for Science and Technology” by the Japanese National Museum of Nature and Science. The goal of the museum, according to Fujitsu, is “to select and preserve materials representing essential results in the development of science and technology, that are important to pass on to future generations, and that have had a remarkable impact on the shape of the Japanese economy, society, culture, and the lifestyles of its citizens.”
A video of the FACOM128B in action can be seen below:
The FACOM128B was used to design camera lenses and the YS-11, the first and only post-war airliner to be wholly developed and manufactured in Japan until the Mitsubishi SpaceJet. While the YS-11 aircraft was not commercially successful, this wasn’t the result of poor computer modeling; the FACOM128B was considered to be a highly reliable computer. Fujitsu’s decision to keep the machine in working order was itself part of a larger program, begun in 2006. The company writes:
The Fujitsu Relay-type Computer Technology Inheritance Project began activities in October 2006, with the goal of conveying the thoughts and feelings of the technical personnel involved in its development and production to the next generation by continuing to operate the relay-type computer. In this project, the technical personnel involved in the design, production, maintenance, and operation of the computer worked with current technical personnel to keep both the FACOM128B, which is fast approaching its 60th anniversary, and its sister machine, the FACOM138A, in an operational state.
Hamada has been working on the electromechanical computer since the beginning of this program. He notes that in the beginning, he had to learn how to translate the diagrams the machine’s original operators had used. Asked why he believes maintaining the machine is so important, he said: “If the computer does not work, it will become a mere ornament. What people feel and what they see are different among different individuals. The difference cannot be identified unless it is kept operational.”
It’s always interesting to revisit what’s been done with older hardware or off-the-wall computer projects, and I can actually see Hamada’s point. Sometimes, looking at older or different technology is a window into how a device functions. Other times, it gives you insight into the minds of the people that built the machine and the problems they were attempting to solve.
One of my favorite off-the-wall projects was the Megaprocessor back in 2016, a giant CPU you could actually see, with each individual block implemented in free-standing panels. Being able to see data being passed across a physical bus is an excellent way to visualize what’s happening inside a CPU core. While maintaining the FACOM128B doesn’t offer that kind of access, it does illustrate how computers worked when we were building them from very different materials and strategies than we use today.
Update (5/18/2020): Since we first ran this story, YouTuber CuriousMarc arranged for a visit to Fujitsu and an extensive discussion of the machine. You can see his full video below. It’s a bit lengthy, but it dives into the history of the system and Hamada himself.
A group of investors has sued Nvidia, alleging that the company deliberately misled the wider market regarding demand for GeForce products during the cryptocurrency boom of 2017 – 2018. Back then, if you recall, GPU prices blew straight through the roof and stayed high for months.
According to these investors, Nvidia chose to deliberately misclassify revenue as gaming-related when it knew otherwise. Nvidia did, in fact, make a number of statements indicating it believed cryptocurrency was a relatively small percentage of sales, with the absolute value of those sales represented in its “Crypto SKU” reporting.
The complaint alleges that Nvidia knew full well where its GPUs were going, and that it misled investors into thinking this revenue would continue into the future. The complaint details a series of reports supposedly detailing exactly how GPUs were being sold that were sent to CEO Jen-Hsun Huang on an ongoing basis.
If true, this would imply that Nvidia’s decision to dramatically raise prices with the Turing generation wasn’t an accident or a misread of the market, but a deliberate effort to upsell RTX as a permanent feature worth paying for.
The fact of the matter is, Nvidia hit the Turing launch with a lot of Pascal era cards on the market and had to clear them at the same time it was trying to launch its new card family. But part of what hurt it in the early days of Turing was its own decision to raise prices as part of the RTX debut.
That decision to raise prices has never made a ton of sense to me. Confining the cost increase to the RTX 2080 Ti would have worked just fine, but raising the RTX 2080 and 2070 prices to the degree Nvidia did effectively moved them up an entire price bracket. Gamers reacted by adopting Turing more slowly than Pascal. Nvidia later cut Turing prices when the AMD Radeon 5700 and 5700 XT launched, but there was always a question as to why it had raised prices in the first place. If Nvidia had genuinely misunderstood where its own revenue was coming from, raising prices on its consumer GPUs would at least have made some sense.
Of course, there’s an alternative explanation: Nvidia may have known exactly where its revenue was coming from (as this complaint alleges), and have taken the opportunity to raise prices simply because it could. According to the complaint, data from GeForce Experience was used specifically to confirm how customers were using the GeForce cards they purchased. Supposedly, data mined from GFE showed that over 60 percent of GPU sales went to miners during the Class Period.
It seems unlikely that the crypto market wound down the way Jen-Hsun had expected — the glut of Pascal cards on the market at the end of 2018 and into 2019 were obviously a problem for Nvidia’s effort to raise Turing prices — but if the allegations in this complaint are true, Nvidia knew who it was selling GPUs to throughout the period.
When Epic released footage from its upcoming Unreal 5 tech demo last week, one major question was whether PC gamers would be able to enjoy equivalent performance or capabilities. As we discussed last week, any major boost to console performance that relies explicitly on storage hardware could require relatively fast hardware on the PC side of things.
Epic has now chimed in and confirmed our expectations. According to Epic Games China (via DSOGaming), the PS5 version of the demo ran at 1440p and 30fps. A notebook equipped with an RTX 2080 and a Samsung 970 EVO SSD is capable of running the demo at ~40fps.
“And I can ensure that we can run this demo in our notebook, in editor, not cooked, it even can 40FPS. (After someone in BBS confirm that the device is RTX 2080 and 970 EVO.)”
This is a fairly useful performance data point. The Samsung 970 EVO isn’t a top-end NVMe SSD, but it’s not a bad performer, either. A 2018 midrange M.2 drive is still significantly faster than the top-end SATA drives of a few years earlier, simply thanks to higher transfer speeds.
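To put those transfer-speed differences in concrete terms, here is a rough sketch of how long it takes to stream a large asset bundle from different drive classes. The throughput figures and the 5GB bundle size are illustrative assumptions, not benchmarks of the demo itself.

```python
# Time to stream a hypothetical asset bundle at typical sequential-read
# speeds. All figures are rough, illustrative assumptions.

DRIVES_MBPS = {
    "7200rpm HDD": 150,
    "SATA SSD": 550,
    "970 EVO (NVMe)": 3400,
}

ASSET_BUNDLE_MB = 5000  # hypothetical 5GB of streamed level data

for name, mbps in DRIVES_MBPS.items():
    print(f"{name}: {ASSET_BUNDLE_MB / mbps:.1f}s")
# 7200rpm HDD: 33.3s
# SATA SSD: 9.1s
# 970 EVO (NVMe): 1.5s
```

The gap between HDD and any SSD dwarfs the gap between SSD tiers, which is consistent with the argument below that nearly any recent M.2 drive should be fine.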
The major piece of the puzzle we’d still like to know is whether the SSD or the GPU is more important to the general performance of the game. If a 970 EVO and an RTX 2080 notebook GPU are sufficient to hit 40fps, what happens if you switch to Intel’s highest-end Optane drive with the same GPU? Conversely, what happens if I drop the SSD down to a SATA model or an early M.2 drive with less absolute performance?
To be clear, I do not expect SSDs to suddenly become more important than GPUs or CPUs for gaming performance. At most, I suspect we’re going to see the emergence of a new factor in gaming performance. The GPU will remain the most important component, followed by the CPU, likely followed by the SSD.
It’s not that SSDs can’t impact gaming already — they absolutely do — but for the most part, the impacts are limited to save game loads and level transitions. Games that show sustained in-play performance benefits from SSDs tend to be those that rely the most heavily on streaming assets during play (Diablo III was one early title to benefit directly from an SSD in this fashion, as it reduced lag while exploring the world).
If you’re a gamer on an old magnetic hard drive, I wouldn’t panic and rush out to buy an SSD, but it’s probably time to start seriously eyeing an upgrade. It looks as though new games may demand more performance from your storage solution. As for what kind of solution, I’m not going to make specific predictions there, but any reasonable M.2 SSD from the past few years will undoubtedly be fine, and I’d be stunned if every type of SSD wasn’t ultimately supported.
Up until now, games have treated HDDs and SSDs as largely synonymous, though plenty of titles recommend solid-state storage as part of their ideal system requirements. The question is, will we now see HDDs fall off the “Minimum” end of the spec sheet, or will it just be a case of SSDs giving a profoundly better experience?
With its fast Intel Core i7 processor, HP’s 15t laptop has excellent performance for everyday tasks. It’s also fairly inexpensive at the moment, marked down from $799.99 to just $579.99. If you have an extra $50, you can also upgrade the notebook with a 1080p display.
Built with one of AMD’s new hexa-core Ryzen 5 4500U processors, this laptop should have excellent performance for everyday tasks. It should also have decent power to run games with low settings, and it will likely boot quickly as well thanks to a 256GB NVMe SSD. Right now you can get this laptop from HP marked down from $769.99 to $699.99.
HP 15t Intel 10th-gen Core i7-10510U Quad-core 15.6″ Laptop for $559.99 (+$50 for 1080p Display) at HP (list price $799.99)
HP ENVY x360 AMD Ryzen 5 4500U 6-core 15.6″ 1080p Touch Laptop for $699.99 at HP (list price $769.99)
HP OMEN 30L AMD Ryzen 5 3600 6-core Desktop with Radeon RX 5700XT for $999.99 at HP (list price $1199.99)
HP OMEN 25L AMD Ryzen 5 3600 6-core Desktop with Radeon RX 5500 for $799.99 at HP (list price $899.99)
HP All-in-One 24 Intel 10th-Gen Core i5-10210U Quad-core 24″ 1080p IPS AIO PC for $699.99 at HP (list price $799.99)
HP Pavilion 15z Touch AMD Ryzen 5 3500U Quad-core 15.6″ Laptop for $559.99 (+$60 for 1080p Display) at HP (list price $679.99)
HP ENVY TE01 Intel Core i5-9400 6-core Desktop for $549.99 at HP (list price $649.99)
HP ENVY 13 Intel 10th-gen Core i7-1065G7 Quad-core 13.3″ 4K UHD Laptop with 512GB SSD for $1099.99 at HP (list price $1299.99)
HP Pavilion TG01 Intel Core i5-9400 6-core Gaming Desktop with GTX 1650 for $699.99 at HP (list price $799.99)
HP OMEN 15 Intel Core i7-9750H + RTX 2070 Max-Q 15.6″ 1080p 144Hz Gaming Laptop with 16GB RAM, 512GB SSD for $1628.99 at HP (Select i7 9750H + RTX 2070, 15.6-inch 144 Hz display and use code: 10GAMERSPRING – list price $2009.99)
HP OMEN 15 Intel Core i7-9750H + RTX 2080 Max-Q 15.6″ 4K Gaming Laptop with 16GB RAM, 512GB SSD for $1808.99 at HP (Select i7 9750H + RTX 2080, 15.6-inch 4K display and use code: 10GAMERSPRING – list price $2209.99)
HP OMEN 15 Intel Core i9-9880H + RTX 2080 Max-Q 15.6″ 4K Gaming Laptop with 16GB RAM, 512GB SSD for $1988.99 at HP (Select i9 9880H + RTX 2080, 15.6-inch 4K display and use code: 10GAMERSPRING – list price $2409.99)
HP OMEN Obelisk 875 Intel Core i5-9600K 6-core Gaming Desktop with RTX 2080Ti for $1682.99 at HP (Select RTX 2080Ti and use code: 10GAMERSPRING – list price $2069.99)
Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com.
While COVID-19 and the shift to working from home have turned the entire American workforce and the flow of global business on their heads, one thing remains unchanged: the will of cybercriminals to take advantage of a crisis.
Cybersecurity has taken on greater importance for virtually every business that’s switched to a work from home model, which means companies need to put a greater emphasis on utilities built on advanced security technologies.
PassCamp Password Manager is one of those services poised to meet our current security demands, offering robust password protection with complete privacy and remarkable ease of use.
Launched in 2017, PassCamp was established as a password service dedicated to serving teams, offering premium security while facilitating collaboration and communication between members.
PassCamp allows managers to build strong passwords with complete end-to-end encryption and zero-knowledge protocols, then share those passwords with team members easily. Managers can determine who gets access to what information and monitor and reset passwords with just a few simple commands. Under the PassCamp method, even PassCamp can’t access your information, assuring the tightest possible security to protect your business’s most vital information.
Users have full accountability and transparency, with logs of password usage, including all edits and shares, as well as a complete history of password changes. Meanwhile, each team member can search and filter passwords based on who shared them, their title, or the date they were added.
Rather than relying on shared spreadsheets or sending passwords over email or messaging apps, PassCamp grants password access only to those you want included, all in an interface so user-friendly that any team member can log in and work easily.
If you had to pick two groups of people unlikely to be sending each other Christmas cards in the tech industry, you could do worse than picking Microsoft and the open-source community (we’ll assume you’re on some oddly themed episode of Family Feud, possibly featuring Steve Harvey dressed like a copy of Windows 95). Historically, the two groups have been on very opposite sides of computing, dating back to Steve Ballmer’s infamous comments on open source code back in 2001.
Ballmer memorably called Linux “a cancer that attaches itself in an intellectual property sense to everything it touches.” In an interview with MIT’s Computer Science and Artificial Intelligence Lab, Microsoft president Brad Smith said that Microsoft had been “on the wrong side of history when open-source exploded at the beginning of the century.”
Smith continued: “Today, Microsoft is the single largest contributor to open-source projects in the world when it comes to businesses. When we look at GitHub, we see it as the home for open-source development, and we see our responsibility as its steward to make it a secure, productive home.”
This radical shift in attitude isn’t just words, though many in the open-source community remain skeptical of Microsoft’s long-term intentions. The company will ship a full Linux kernel in the next Windows update and has already partnered with Canonical to bring Ubuntu to Windows 10. The pivot to embrace open source has been one of the signature features of Satya Nadella’s tenure as Microsoft’s CEO.
Microsoft president Brad Smith
The general thinking is that this shift in priorities reflects Microsoft’s perception of where its core business interests lie. When Microsoft was focused on building Windows and Office, it viewed Linux as more of a competitor. Nadella, however, has made it clear that he views Microsoft as a cloud-centric company in every respect. When your primary business is hosting other people’s workloads, projects, and infrastructure, it makes no sense at all to view Linux as a competitor rather than an operating system many of your clients will want to deploy or rely on for critical components of their own projects.
While there are genuine concerns about how Windows development is prioritized within Microsoft as a result of these changes, the company’s changing position relative to Linux reflects the changing nature of computing. Remaining focused solely on traditional enclaves like desktops and laptops would effectively result in Microsoft being left behind: relevant for the workloads that run on PCs, but without much input into future developments. And Microsoft was on the wrong side of history with respect to open source and overall Linux development. Linux may never have succeeded on the desktop, if success is defined as breaking Microsoft’s near-monopoly. But it succeeded spectacularly everywhere else across computing, including in servers and the cloud.
The Google Pixel 4 and 4 XL finally made good on a technology demo from 2015 with the introduction of Motion Sense gestures. The Soli radar chip technology lets you control some aspects of the phone simply by waving your hand, but a new leak claims Google will abandon this feature in the Pixel 5.
Google’s Pixel brand started strong in 2016, but each generation since has suffered from shrinking sales. The budget-oriented Pixel 3a has been the only bright spot in Google’s smartphone lineup, and it continues to overshadow the Pixel 4 and 4 XL. These phones launched in late 2019 with a rather sizeable bezel at a time when most device makers are trying to shave off unused space. Google needed that extra surface area for the new face unlock sensors and the Soli radar module.
Sources speaking to 9to5Google now say that Google’s troubles with the Pixel line have led it to scale back the feature set in its 2020 flagship. Soli is understandably on the chopping block.
Soli, which Google first demoed in 2015, tracks nearby objects and people. In the original demos, Google showed how Soli’s radar mapping could create virtual knobs and buttons, but the miniaturized version in the Pixel 4 is only good for a few things like controlling media playback and waking the screen as you pick the phone up.
Those are potentially useful features, but they don’t work reliably, and Google has been slow to add any additional capabilities. The feature is also disabled in some countries where Google couldn’t get regulatory approval for the radar technology. Running a radar sensor 24/7 also drains power that the phone’s mediocre battery life can ill afford.
Google talked up the Pixel 4’s hands-free gesture operation early and often.
Removing Motion Sense from the Pixel 5 would be an embarrassment for Google, but it could also make the Pixel 5 a more successful phone than the Pixel 4. Google could cut the price of the Pixel 5 by dropping Soli, and the design could become more modern without so much space reserved for the radar module.
We’re still about five months out from the Pixel 5, but Google did start talking about the Pixel 4 several months early. We might know as soon as later this summer if Google is giving Soli the boot.
Developing a new vaccine usually takes years, but companies and governments around the world are racing to create one to neutralize the pandemic coronavirus. There is some hope for a relatively quick turnaround today. US-based biotech firm Moderna says its initial human vaccine trials have been a success with all patients developing antibodies against the virus.
As with any vaccine, the goal is to get the immune system to produce antibodies that can eliminate the virus. That’s what happens naturally when someone becomes infected with the virus and survives, but a vaccine spares you the dangerous symptoms of infection. Moderna, like other vaccine researchers, has focused its efforts on the spike protein the coronavirus uses to enter cells. Antibodies keyed to the spike protein attach to it and prevent it from binding to the receptors on your cells. Moderna tested three different vaccine dosages across 45 study participants, and all of them developed neutralizing antibodies.
Moderna’s approach to developing this vaccine relies on viral RNA rather than the finished virus particle. All current vaccines used in humans rely on all or part of the target virus to train the immune system, but an RNA vaccine is potentially faster to develop. RNA is the step between DNA and proteins, so delivering this foreign RNA to cells causes them to produce the viral protein and prompts an immune response.
Here, Moderna’s candidate, mRNA-1273, is built around the segment of the SARS-CoV-2 genome that codes for the spike protein. The company administered 25, 100, or 250 micrograms of mRNA-1273 to patients, and all of them developed antibody levels similar to those of people who have recovered from COVID-19. The only side effects were minor redness and injection-site pain among those who received the highest dose. The next phase will test doses between 25 and 100 micrograms.
This is the first COVID-19 vaccine to make it through the first round of clinical trials. On its accelerated timeline, Moderna hopes to begin testing larger groups of people in the coming weeks. The next phase of clinical trials will start with 600 people, expanding to as many as 1,000 in July. The company and public health officials will need to evaluate all the data before making the vaccine publicly available. Best case, the vaccine could be available later this year or in early 2021.
AMD’s Ryzen Mobile CPUs hit the market to some fanfare earlier this year, delivering substantially improved overall performance over previous generation chips and very strong competition for Intel on the whole.
It’s been obvious that AMD would bring this APU family to desktop as well, but it looks like the CPU in question may be fairly aggressive. Rumors suggest that the Ryzen 4700G is an 8C/16T CPU with a base clock of 3.6GHz, a boost clock of 4.45GHz, 4MB of L2, 8MB of L3, and a 2.1GHz maximum GPU clock, all packed into a 65W TDP.
Could it be true? It could be. Grain of salt, etc. But a move like this isn’t crazy, relative to how AMD has been evolving the overall Ryzen product family.
Ever since Intel and AMD introduced on-die graphics, they’ve pursued very different strategies. Intel put on-die graphics on just about every CPU outside of servers and the desktop HEDT family; the “KF” CPUs it now sells without graphics were only launched to improve yields during a critical CPU shortage. AMD, on the other hand, has always reserved its on-die GPU for a limited number of CPUs. Ever since it launched Llano, AMD has pursued a two-tier strategy: a CPU-only desktop platform with higher core counts, alongside APUs with fewer CPU cores but respectable on-die GPUs.
This rumor suggests that AMD could make graphics standard on all parts below the 16-core level. It’s not clear how much of a value-add this represents to modern users. APU graphics are unquestionably valuable for two reasons: You cannot lose access to a system simply because your GPU dies, and you can run multi-monitor configurations more readily if you have a built-in GPU. Relatively few people, however, regularly find themselves in dire need of either capability.
Supposedly, the GPU onboard the Ryzen 7 4700G would be an 8-CU Vega chip with 512 cores in total, clocked at a blazing 2.1GHz. That’s incredibly fast for an onboard APU, even if clocking up that high can only make a limited amount of sense due to memory bandwidth limitations. The Ryzen 7 4700G will undoubtedly shine with high-speed memory — APUs always do — but the relatively high price of RAM as you move up the clock charts always makes this investment an uncertain proposition.
Depending on how AMD distributes and prices the new Vega graphics core, these new APUs could be a noteworthy performance improvement over the old ones, especially for people who are primarily CPU-focused. AMD could also be planning a preemptive response to any changes Intel might make to its own desktop graphics with future CPU launches.
TSMC has announced it will build a cutting-edge semiconductor foundry in Arizona, with plans to bring the fab online at the 5nm node. There are questions about just how big the factory will be and how central it is to the company’s plans, but choosing to site a foundry in the US is a significant step in the economic relationship between the United States and the Taiwanese foundry company.
The details of the plan, however, are a little surprising. According to TechCrunch, the new fab will target 5nm production, won’t be ready until late 2023 / early 2024 at the earliest, and is apparently targeting just 20,000 wafer starts per month. That puts it at less than half the size of the GlobalFoundries New York factory, which can handle about 50,000 wafer starts per month.
It’s a little surprising to hear that TSMC is targeting the 5nm node for this foundry, though that could change over time. Foundries typically take 3-4 years to build, so companies do not usually target them at nodes that are current today. If TSMC intended the US facility to be leading-edge, it would already be targeting 3nm for the factory.
It’s possible, however, that TSMC will change its node target, or that we’re seeing the impact of fewer and fewer companies making the transition to leading-edge nodes. The stated reason GlobalFoundries got out of the game several years ago is that the company didn’t believe there would be enough customers at 5nm to make chasing the leading edge worth it. As the number of companies eager to advance to the leading edge shrinks, more companies are (logically) left behind on older nodes. TSMC may also hope to entice companies onto nodes they aren’t currently using if it can continue to improve those processes and lower the cost of adoption, which could make 5nm a long-term node for the manufacturer.
Inside a TSMC foundry facility. The yellow light is required to ensure safe wafer processing.
Overall, this announcement comes at a complex time for TSMC. Both mainland China and the United States are critically important customers for the company, and the increasing tension between them can’t be making things easy for the foundry right now. The surprising thing about the announcement is that there’s no mention of Intel at all. This could reflect the fact that Intel is currently focused on ramping up production at Fab 42 and other facilities, or that the company didn’t find a foundry investment advantageous right now, but it’s a surprising outcome. Given President Trump’s heavy emphasis on “America first” and the fact that both companies were in talks with the Administration, I suspect a lot of people, myself included, half-expected an Intel announcement instead.
This isn’t TSMC’s first US foundry, but it’ll be much closer to leading-edge than the company’s other facility. The other fab, in Camas, Washington, primarily builds flash memory on 350nm – 160nm nodes. Even if 5nm isn’t cutting-edge by the time the new facility opens, it’ll be much closer than anything TSMC currently builds in the United States.
There are some other strange angles to this deal. As Junko Yoshida points out, TSMC’s total capacity is 13 million wafer starts per year, or over one million per month. No 20,000-wafer facility built on 5nm is going to make a dramatic difference there. Also, there’s no mention of where in Arizona this fab is actually going to be built — just an announcement that TSMC is building one. She thinks TSMC’s willingness to build a fab in the US may be part of why the DoD recently shifted its position on enforcing further sanctions against Huawei. This announcement may have precious little to do with actually building a foundry.
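Yoshida’s capacity point is easy to sanity-check with back-of-the-envelope arithmetic, using the figures cited above:

```python
# Back-of-the-envelope check on the capacity figures in the article.
total_per_year = 13_000_000          # TSMC's total wafer starts per year
total_per_month = total_per_year / 12
arizona_per_month = 20_000           # planned Arizona fab capacity

print(round(total_per_month))                        # roughly 1.08 million/month
print(f"{arizona_per_month / total_per_month:.1%}")  # Arizona's share: about 1.8%
```

At under two percent of TSMC’s monthly output, the planned fab really is a rounding error in the company’s overall capacity.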
The facility is expected to employ roughly 1,600 people and is scheduled to break ground in 2021.
Memorial Day weekend is just about to start, and the holiday deals are already rolling in. Today you can grab one of Apple’s 2019 10.2-inch iPads from Amazon with a $79 discount. This makes Apple’s most affordable iPad even more so at just $249.99 for the 32GB version.
Apple’s 2019 iPad features the company’s A10 processor that first debuted in the iPhone 7 at the end of 2016. Although that makes this tablet’s hardware a few years old, you get the performance of this former flagship with a large high-def screen and 32GB of storage space at a surprisingly affordable price. Right now you can get one from Amazon marked down from $329.00 to $249.99.
Apple designed its stylish MacBook Pro with an ultra-thin aluminum body and a high-res 2,560×1,600 display. Equipped with a Core i5 processor, this system has decent performance for everyday tasks and should perform well for watching your favorite movies on the go. It also sports Intel’s Iris Plus Graphics 645 iGPU, giving it enough power to run some games with low settings. Right now you can get this system from Amazon marked down from $1,499.00 to $1,199.00.
Apple iPad 10.2″ 32GB WiFi Retina Tablet for $249.99 at Amazon (list price $329)
Every week, we offer you dozens of fantastic deals on stuff you either need, want, will need, or will want. But because it’s easy to miss something, we’ve pulled together some of the week’s best, giving you one last shot at these special discounts before they’re finally gone for good.
Improve your workspace
Since you’re probably still working out your optimal home workflow, a few of these items may help improve some of your aging tech or just make you a little more comfortable as you slave away.
Of course, the reason you’re home in the first place is this deadly virus outside. We all know personal protective equipment (PPE) is in short supply, so stock up on a 10-pack of KN95 masks ($49.99; originally $99.99). Not to be confused with the medical-grade N95 masks reserved for medical personnel, these sturdy faceguards will still block 99 percent of particles as you inhale and 70 percent of what you exhale.
Music’s never been more vital to keeping us all sane, so a pair of headphone options from Skullcandy could be just the thing. The Indy True Wireless Earbuds ($59.99; originally $83.99) deliver a truly wireless experience, a sound profile tuned for crisp highs and warm lows without distortion, and up to 16 rechargeable hours of battery life.
The first 5G phones have launched over the past year, and you might even have one now that Qualcomm has pushed OEMs to include the next-generation tech on all flagship devices. However, you probably don’t have a quantum 5G phone. Yes, that’s a thing thanks to a partnership between South Korea’s SK Telecom and Samsung. It’s also, surprisingly, not as useless as it sounds.
The new Galaxy A Quantum is the first and only smartphone with a Quantum Random Number Generator (QRNG) inside. The device itself is based on the mid-range Galaxy A71 5G, which Samsung has already launched in numerous markets sans quantum technology.
The Galaxy A Quantum has a 6.7-inch OLED display with a hole-punch camera (32MP) and an in-display optical fingerprint sensor. The camera array features four sensors: a 64MP main, 12MP ultra-wide, 5MP macro, and 5MP depth sensor. Unlike most 5G phones, the A71 5G (and thus the Galaxy A Quantum) runs on a Samsung Exynos 980 chip with a Samsung 5G modem. It also sports 8GB of RAM and 128GB of storage.
According to Samsung, the Galaxy A Quantum is much more secure than other smartphones thanks to its QRNG hardware, which is completely separate from the SoC and other core hardware. It’s a tiny embedded chipset called the SKT IDQ S2Q000, just 2.5mm square, consisting of an LED and a CMOS sensor. The LED shines into the sensor to produce image noise, and the sensor interprets that as quantum randomness. These random noise patterns become the basis for truly random number strings. When paired with a compatible online service, SK Telecom says the connection is completely unhackable — although that sounds like a challenge some hackers would relish. It only works with SK Telecom services, though.
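As a loose illustration of the general principle at work (noisy physical samples whitened into uniform random bits), here’s a sketch using the classic von Neumann corrector. To be clear, this is a generic teaching example, not the IDQ chip’s actual design; real QRNGs use certified extraction circuits.

```python
# Generic sketch: derive random bits from noisy sensor readings.
# Illustrative only -- NOT the SKT IDQ S2Q000's actual algorithm.

def raw_bits(sensor_samples):
    """Take the least-significant bit of each noisy sensor reading."""
    return [s & 1 for s in sensor_samples]

def von_neumann(bits):
    """Debias a bit stream: 01 -> 0, 10 -> 1, 00/11 -> discard.
    Removes bias from the raw bits at the cost of throughput."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# Simulated biased sensor noise around a mean pixel value of ~130:
samples = [130, 131, 131, 130, 130, 130, 131, 131, 131, 130]
print(von_neumann(raw_bits(samples)))  # -> [0, 1, 1]
```

The key property the hardware provides is that the underlying noise is quantum in origin and therefore unpredictable, something no software-only generator can guarantee.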
The Galaxy A Quantum will go on sale May 22nd in South Korea for KRW 649,000. That’s about $530, which is a bit more than the A71 5G on which it is based. Samsung hasn’t talked about future plans for this technology, but it seems like a good addition to Samsung’s existing Knox security framework. The company could use it to secure connections to its various online services, which are bundled with every phone it sells. Even if it’s not actually very useful, Samsung might still continue using this chip because labeling its products “quantum” sounds cool.
There’s a rumor going around that the new OnePlus 8 camera can see through objects in a sort of x-ray effect. It can’t — that’s not what it does — but it does provide some slight visibility through a handful of otherwise-solid objects, thanks to what appears to be a built-in camera defect.
Whether or not you consider this a feature, therefore, may depend a great deal on what kind of problems it causes for your other images (if any). To see the feature in action, activate the “Photochrom” filter and point the camera at the right kind of object. Here’s an Apple TV, as one example:
Here’s what’s happening. Cameras are typically designed to capture light in the same wavelengths that humans can see. There’s no reason they have to be, and there are wavelengths of light that cameras are capable of capturing but simply don’t, because doing so would introduce visual artifacts into the spectrum bands that humans can see. In this case, OnePlus’ camera is picking up infrared light that our eyes can’t normally see. The combination of a slightly opaque (in visible light) surface and the OnePlus 8’s slightly infrared-friendly camera creates output we wouldn’t normally get. The result? An “x-ray” camera (or at least the appearance of one).
Your brain is capable of seeing colors you don’t normally process if handed the input to do so. Some years ago, we wrote about the case of a man who had the lenses of his eyes replaced with artificial ones. As sometimes happens in these cases, the new artificial lenses allowed him to see deeper into the ultraviolet than is typical for humans. Tests with precise spectrographic measuring equipment confirmed it. When handed deeper UV light than we typically see, the brain can map it to visual output, to some modest extent.
Back to the OnePlus 8. In this case, the camera doing the sensing is a low-quality sensor that doesn’t take very good photos. AndroidCentral has dismissed the privacy risk for this reason, given that the “Photochrom” mode apparently degrades image quality further. The overall privacy risk is small, the company claims, though we can understand why folks might be leery given how easily footage finds its way online these days. This is also a problem OnePlus really should have caught at the factory.
It’s an interesting way to see inside some otherwise opaque hardware, but those who want a serious look at such things may be better off just hunting down a screwdriver. Cameras that can see in the infrared are cool, but the usefulness of this model is rather lacking.
Tesla’s lithium-ion battery technology is already the envy of the automotive industry, and the company may be moving even further into the lead soon. A new report from Reuters claims Tesla will begin deploying its new “million-mile” battery in late 2020 or early 2021. That’s not the official name, of course, but it’s a reference to how much longer the cells can operate before failing. That’s about twice the average lifespan of current lithium-ion batteries.
Tesla has been working with Chinese battery giant Contemporary Amperex Technology Co. Ltd (CATL). Reuters reports the first cars with the new battery will be released in China, which is a market in which Tesla wants to gain a foothold. The company may confirm the basics of the Reuters report in several weeks at its “Battery Day” for investors. That was supposed to happen in April, but Tesla delayed the event due to the coronavirus pandemic.
CEO Elon Musk has spent the last few years hiring battery experts, buying small firms, and forging partnerships with both universities and other companies to make this battery a reality. Most of what we know about the probable technology comes from the expertise of the new hires and studies released by Tesla’s partners. For example, Dalhousie University has detailed a nickel manganese cobalt oxide (NMC) crystal structure for battery cathodes that can resist wear over time. Tesla has an exclusive licensing arrangement with the university. Reuters also says the new batteries will rely on low-cobalt components and chemical additives that reduce stress over time.
The Tesla Model S’s battery pack is that big flat area in the middle, protected by ‘ballistic-grade’ aluminum.
The new battery technology should reduce manufacturing costs dramatically. Some experts believe Tesla will drop below $100 per kilowatt-hour of battery capacity. That could mean cheaper vehicles that are more in line with comparable gasoline-powered cars. That’s sure to help the company become a major player in China, which is already the world’s leading consumer of electric vehicles. Tesla even opened a Gigafactory to manufacture batteries in Shanghai recently, becoming the first foreign automaker to own and operate its own factory in China.
While the new batteries will start in China, sources claim Tesla will roll them out to other regions, but that might not happen until there are improved versions with better capacity and stability. This comes as Tesla’s relationship with Panasonic is winding down. The Japanese battery giant has worked with Tesla at several US factories, but it’s planning to pull out of at least one project in the next few weeks. CATL could pick up the slack and deliver even better components.
Video gaming is basically a giant bag of tricks designed to simulate the real-life environments around us. As such, developers have had to create methods of simulating ideas like weapons fire and hit effects. There are two main methods for doing this — hitscanning and projectile ballistics.
Over at Gamasutra, Tristan Jung has taken a look at the two methods, comparing and contrasting their implementations and why developers use them. Hitscanning is based on raycasting, which fires a beam from the muzzle of the gun and measures if it strikes anything. If the engine determines that an object was struck, it reports a bullet impact.
This trick has been used in all sorts of ways. Want the ray to continue straight through the target you hit? Congratulations, you just invented Quake II’s railgun. Allow rays to bounce off walls, and you’ve created reflective shots or even the concept of a shrapnel hit (you can always have a ‘bullet’ do less damage if it has recorded a bounce first).
One way to tell if a game is using hitscan or not is to check the latency between pulling the trigger and hitting a target. Hitscan weapons hit instantly. They don’t tend to model factors like bullet drop or wind velocity. Developers can simulate these effects by using curved rays, but the ray won’t change direction once ‘fired.’
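As a rough illustration of the raycast idea, here’s a minimal 2D hitscan sketch. The function names and circle-shaped targets are invented for the example; real engines cast rays in 3D against mesh colliders, but the geometry test is the same in spirit.

```python
# Minimal 2D hitscan sketch: fire a ray from the muzzle and return the
# first target it intersects. Names and shapes are illustrative only.
import math

def hitscan(origin, direction, targets):
    """origin: (x, y); direction: unit vector (dx, dy);
    targets: list of (center_x, center_y, radius) circles.
    Returns (distance, target) of the nearest hit, or None."""
    best = None
    for tx, ty, r in targets:
        # Vector from ray origin to target center
        ox, oy = tx - origin[0], ty - origin[1]
        # Projection of that vector onto the ray direction
        t = ox * direction[0] + oy * direction[1]
        if t < 0:
            continue  # target is behind the muzzle
        # Closest point on the ray to the target center
        px = origin[0] + t * direction[0]
        py = origin[1] + t * direction[1]
        dist_sq = (tx - px) ** 2 + (ty - py) ** 2
        if dist_sq <= r * r:
            # Entry point along the ray (pull back from closest approach)
            hit_t = t - math.sqrt(r * r - dist_sq)
            if best is None or hit_t < best[0]:
                best = (hit_t, (tx, ty, r))
    return best

# A target 10 units straight ahead with radius 1 is hit at distance 9.
print(hitscan((0, 0), (1, 0), [(10, 0, 1)]))  # -> (9.0, (10, 0, 1))
```

Because the whole check happens in the frame the trigger is pulled, the hit registers instantly, which is exactly the tell described above.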
The other method of calculating bullet trajectories is to actually model projectile ballistics. In this system, bullets have mass, velocity, and a hitbox. This allows for much more realistic modeling of real-world effects like gravity, wind, and friction. Games like Max Payne use this method; it’s what allows for the game’s ‘bullet time.’ While hitscanning is the technique used in games like Wolf3D, projectile ballistics is actually the older method of simulating an object. If you think about how the shotgun and chaingun work in Doom, you can tell they use hitscanning (with some pseudo-auto-aim in some cases when firing at a target higher or lower than you).
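The projectile approach boils down to integrating position and velocity a little bit each frame. Here’s a simplified sketch with made-up constants; real engines use their own integrators, units, and collision checks.

```python
# Sketch of projectile ballistics via simple per-frame Euler integration.
# Constants and function names are illustrative, not from any real engine.
import math

GRAVITY = -9.81  # m/s^2

def step(pos, vel, dt):
    """Advance a projectile one timestep: move, then apply gravity."""
    x, y = pos
    vx, vy = vel
    x += vx * dt
    y += vy * dt
    vy += GRAVITY * dt  # gravity curves the trajectory a little each frame
    return (x, y), (vx, vy)

def simulate(pos, vel, dt=1.0 / 60.0, ground=0.0):
    """Step until the projectile comes back down; return the impact x."""
    while pos[1] > ground or vel[1] > 0:
        pos, vel = step(pos, vel, dt)
    return pos[0]

# Fired at 45 degrees at 50 m/s, the analytic range is v^2/g ~= 254.8m;
# the discrete simulation lands close to that, give or take timestep error.
v = 50.0 / math.sqrt(2.0)
print(round(simulate((0.0, 0.0), (v, v)), 1))
```

Note that the bullet now takes many frames to arrive, which is why projectile weapons force players to lead moving targets in a way hitscan weapons never do.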
With hitscan weapons, visible bullets or tracers may well be ‘ghosts,’ and where they impact on the player model may not actually correspond to where the hit was registered. Some game engines use hybrid effects, where projectile ballistics are used to calculate the visual path, but hit-checks are performed with hitscan.
This is why I described video gaming as a bag of tricks earlier in this story. We start off with two simple concepts — one for projectile ballistics and one for hitscans. Once you start unpacking the way these systems are actually implemented, you find an entirely new set of tricks for implementing effects like shrapnel, overpenetration, wind, and gravity. If a game simulates the impact of wind and gravity on projectile ballistics, it means there’s another series of methods for approximating those effects.
In some cases, diving down to this level of detail in a game engine means you’ve essentially arrived at a nuance that’s perfectly valid to explore for its own sake, but that most people don’t really care about. That’s not true when it comes to weapon ballistics. How guns handle in-game is part and parcel of the overall experience, and the method for checking hits can matter a great deal.
Jung goes into more detail on how projectile ballistics is implemented in engines, so check the article for the full details. Some games implement both methods, which is a feature I’ve always liked. Hitscan tends to work very well for lasers or various sci-fi weapons with near-instant hits. Projectile ballistics works well for objects that take longer to reach the target — and allows for the effective modeling of things like overpenetration or shrapnel from a frag grenade.
We’ve always known that online ads ate some degree of battery life and that misbehaving ads can have a disproportionate impact on the end-user experience, but a recent Chrome blog post drives home just how big the tail on the problem really is.
According to Google’s research, it’s a tiny fraction of ads that ruin everything for everybody else. 0.3 percent of ads account for 27 percent of all of the network bandwidth and 28 percent of the total CPU usage Google measured.
Google’s tests on resource consumption
Moving forward, the company is going to curtail how much processing power advertisements can access. According to Google, the limits aren’t heavy-handed — the overwhelming majority of existing web ads already comply with the standards the company has set. Ads may not transfer more than 4MB of data, use more than 15 seconds of CPU time within any given 30-second period, or use more than 60 seconds of CPU time in total.
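Those thresholds can be expressed as a simple check. The sketch below is a hypothetical restating of the rules as described, not Chrome’s actual implementation (Chrome enforces this inside the browser, per ad frame).

```python
# Illustrative sketch of the "heavy ad" thresholds described above.
# The structure and names here are hypothetical, not Chrome's code.

MAX_BYTES = 4 * 1024 * 1024   # 4MB of network transfer
MAX_CPU_WINDOW = 15.0         # seconds of CPU in any 30-second window
WINDOW = 30.0                 # sliding-window length in seconds
MAX_CPU_TOTAL = 60.0          # seconds of CPU overall

def is_heavy(bytes_transferred, cpu_samples):
    """cpu_samples: list of (timestamp, cpu_seconds_used) events.
    Returns True if the ad trips any of the three limits."""
    if bytes_transferred > MAX_BYTES:
        return True
    total = sum(cpu for _, cpu in cpu_samples)
    if total > MAX_CPU_TOTAL:
        return True
    # Check 30-second windows anchored at each sample
    # (good enough for this illustration).
    for start, _ in cpu_samples:
        in_window = sum(cpu for t, cpu in cpu_samples
                        if start <= t < start + WINDOW)
        if in_window > MAX_CPU_WINDOW:
            return True
    return False

# 16s of CPU inside one 30s window trips the limit:
print(is_heavy(1_000_000, [(0.0, 8.0), (10.0, 8.0)]))  # -> True
# 10s of CPU spread over a minute does not:
print(is_heavy(1_000_000, [(0.0, 5.0), (45.0, 5.0)]))  # -> False
```

The windowed check is what lets well-behaved ads burn CPU briefly (say, during an animation) without being flagged, while still catching sustained cryptomining-style abuse.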
This is another in a series of fairly conspicuous user-friendly changes Google has introduced to Chrome in the recent past. Last year, Google began automatically blocking disruptive ads. Back in November 2018, it took action to demonetize sites that ran “consistently deceptive” ads in an effort to protect users from sites that trap them in an endless series of clickjacking redirects.
Google intends to experiment with these changes and then slipstream them into stable Chrome by the end of August. The goal is to give tool creators and advertisers enough time to prepare for the debut and to tweak their own advertisements to avoid falling afoul of Google’s bar. From the sound of things, very little legitimate web traffic should be impacted, and the ads that do have to be updated could likely use the work in any case.
Getting a robotic explorer to another world is an amazing accomplishment, but it won’t be worth much if the mission comes to an early end because it gets stuck in the soil. NASA has partnered with the Georgia Institute of Technology to investigate ways a rover might be able to cope with crumbling or loose materials. The school constructed a prototype vehicle called Mini Rover with nifty “wiggling” wheels.
Sending robots to explore a place like Mars has many advantages, not the least of which is that robots are much more hardy than humans. However, controlling a machine from several light-minutes away is tricky. Operators can use images from the rover to plan courses, but they can only guess about the consistency of the surface. Attempting to climb a hill with loose gravel could be disastrous when there’s no one within millions of miles who can right a flipped rover. Even seemingly safe flat surfaces can be dangerous — the Spirit rover met its fate after becoming stuck in a patch of loose sand. A better understanding of a branch of physics called terradynamics could help avoid this.
According to Georgia Tech physicist Dan Goldman, a rover wheel with higher degrees of freedom can help a robot cope with almost any slippage as it navigates across alien landscapes. The Mini Rover uses a wheel maneuver the team has dubbed “Rear Rotator Pedaling.” The front wheels continuously push material back toward the rear wheels, which creates a slope less steep than the real slope. The rear wheels “pedal” to walk up the gentler slope. Thus, the rover creates multiple small hills to get over a large one. If the rover sinks into loose material, the same pedaling motion can pull the wheels free and inch them forward.
Georgia Tech built the prototype in collaboration with Johnson Space Center using 3D printing and commercial off-the-shelf parts. Since it’s easily repaired, the team was able to subject it to harsh conditions without fear of ruining it. This also allowed researchers to test types of locomotion that could never have been tested on a full-sized rover developed for a real mission. They found the careful pedaling movement was the most helpful, and in general, going slow is the best approach.
NASA could choose to integrate similar wheel designs on future rovers, but construction work has already wrapped up on NASA’s next major rover. The Perseverance rover will launch this summer and land on Mars in early 2021. It will have a helicopter drone that may help to map out its course, but an unexpected sand trap could still prove problematic.
The Epic Game Store has been giving games away for several years now, but this is the first time a giveaway has brought the store down that I can remember.
Epic has been working its way up to an unannounced free game drop at 11 AM this morning, but a leak last night confirmed GTA V was on the way. By 11:05 AM, the website had been swamped as people flooded the service to snap up the title. Epic acknowledged the situation in a tweet:
We are currently experiencing high traffic on the Epic Games Store.
We are aware that users may be encountering slow loading times, 500 errors, or launcher crashing at this time and we are actively working to scale. We'll provide an update as soon as we can.
Which is a little odd, if you think about it. GTA V turns seven this year. It’s been released on multiple platforms. Most people own it already, if they want to own it at all. The surge in traffic for the Epic Game Store has apparently been intense enough that it has actually created issues for related Epic services like Epic Battle Breakers and Fortnite.
If you can’t get on the EGS to pick up your copy of the game, don’t worry — it’s going to be free through May 21. Despite the controversy between EGS and Steam, evidently enough gamers are willing to adopt the EGS to crash it when they flood the service looking for free titles. And while people might rationalize the download as a way to get a legitimately AAA title for free, from Epic’s perspective, once it has a spot on your SSD, it’s got a spot on your SSD. No matter what happens afterward, you at least installed the Epic Game Store once.
It isn’t clear which edition of the game Epic is giving away, however, because nobody can log in to check. Rumors ran wild on this point, with some implying Epic would give away the “latest premium edition with additional content.” At the very least, the rumor is that this represents the complete title, not just a front-end for accessing either the single-player campaign or GTA Online.
Epic, one has to note, is having something of a banner week. First, Tim Sweeney’s company wowed the internet with the new PS5 demo built on Unreal Engine 5. Now the EGS has broken down under the weight of Grand Theft Auto, which puts Epic news front-and-center before PC gamers who might not have cared about the console announcement. Granted, one of those events is a bit more prestigious than the other, but the company still managed to pull off both on consecutive days.
The numbers are staggering — and only seem to get bigger. The global video game market is worth $90 billion this year, an increase of almost 15 percent over totals just three years ago. The growth rate rises to almost 25 percent over that timespan when you focus solely on mobile gaming. In fact, the average game priced at $10 on Steam can generate about $10,000 in revenue in its first month.
Over the past decade, AppGameKit has become one of the most popular platforms for both first-time and advanced game developers to assemble cool games quickly for deployment on a wide variety of platforms. This three-course, asset-heavy collection explores how AppGameKit works, presents a roadmap for getting a game started, and offers a healthy stockpile of assets to help give your game some added heft to compete with multi-million dollar releases from the world’s biggest game makers.
AppGameKit: Easy Game Development Platform is your introduction to the AppGameKit environment, providing the framework for building games, even if you have little to no direct coding experience. This training examines what it takes to get started and how to generate games for all platforms from Android and iOS to Windows, Linux or even the Raspberry Pi.
Part of AppGameKit’s appeal is its ease of use — and with AppGameKit: Visual Editor, users learn to easily design game scenes with the streamlined drag-and-drop tool, which also lets them position, scale, and rotate objects in their builds to meet any need. They’ll also get the complete 411 on virtual reality gaming with AppGameKit VR, including how to weave in Leap Motion support, headset detection, and more to adapt a game into a full-scale VR experience.
In addition to the training, users also get valuable insight (and full source code) in a pair of game packs with 28 games, an asset pack with more than 500 royalty-free sprites and sprite construction sets to create over 2,000 unique sprites of your own; and a 3D asset pack with more than 250 low-polygon 3D models.
All this AppGameKit training and support would usually cost about $200, but with the current deal, it’s all available now for only $29.99.
Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com.
As we hear more about the PlayStation 5 and Xbox Series X, it’s become clear that storage performance is where both Microsoft and Sony are hanging their hats with this generation of console. While the new systems offer very real gains in both CPU and GPU performance, the vendors are hammering the storage I/O and seek time improvements over and over.
According to Epic Games CEO Tim Sweeney, what the PS5 offers is beyond what’s available on a top-end PC. “[The PS5] has an immense amount of GPU power, but also multi-order bandwidth increase in storage management,” he said. “We’ve been working super close with Sony for quite a long time on storage. The storage architecture on the PS5 is far ahead of anything you can buy on the PC for any amount of money right now. It’s going to help drive future PCs.”
He could very well be right. Sony has shown more information on this point than Microsoft, but both companies have emphasized custom silicon baked into their upcoming consoles, specifically intended to allow the CPU to be fully devoted to gaming. This makes sense, from an architectural perspective — if you can’t deliver more performance through higher and higher clock speeds, deliver higher performance by making more efficient use of the CPU. PCs don’t have an analogous function block.
Sweeney’s comments, if true, are actually a very good thing for the PC gaming market. Chatter about the Unreal Engine 5 demo has dominated discussion for the past day or so, but if you haven’t seen it yet you can catch it here:
Why Consoles Potentially Leaping Over PCs Is Good for PC Gaming
Sweeney is quite clear that he isn’t just talking about what the average PC user has access to, even at the top end of the market. Assume for a moment that the statements are accurate. It wouldn’t be the first time a console got there first: the first GPU with a unified shader architecture shipped in the Xbox 360 (ATI’s Xenos), ahead of Nvidia’s G80.
Because console and PC games are typically developed in tandem using the same engine, it’s the capabilities of consoles and low-end gaming PCs that collectively set the minimum bar for performance and quality. That’s effectively meant assumptions about storage that were tied to magnetic hard drives and the performance they could offer.
Microsoft and Sony are undoubtedly presenting at least a few mundane technologies as more exotic advances than they are, but they aren’t wrong in the main. We’ve never seen this kind of dedicated NAND cache devoted specifically to gaming, and the performance advantages of doing so could be considerable.
Yanking up the minimum bar sounds great until you consider that PCs don’t have dedicated NAND caches either, at which point it starts to sound like a bad idea again. What I suspect will actually happen here, assuming the issue presents as a problem in the first place, is that we’ll start seeing SSDs showing up as minimum spec requirements for gaming.
I’m not going to dismiss the possibility that Sony or Microsoft could introduce unique game capabilities on consoles that PCs currently can’t match, but I don’t think it’s likely. PCs might need higher RAM requirements or more VRAM to compensate for differences in underlying storage architectures, but it’s unlikely we will see them outstripped.
What’s good about this, though, is that we may start to see developers taking real advantage of the storage medium on PCs as well. Shaking the last mechanical spinners out of AAA gaming will ultimately be good for the entire medium. Even if SATA SSD-equipped PCs actually became the low point for performance over the next console generation, the benefits of making SSDs the default storage solution over HDDs would be enormous.
I admit, I’m making some intrinsic assumptions here. I see three basic possibilities:
1). The PS5 and Xbox Series X’s new storage cache technology is transformative and impacts gaming in ways PCs can’t match. Consoles lead in gaming quality for at least the first few years after launch. PC enthusiasts loudly demand equivalent features, leading to some cutting-edge enthusiast support for capabilities like NVDIMM or Optane DC PM.
2). The PS5 and Xbox’s new storage cache technology is very beneficial, but developers find ways to deliver equivalent or near-equivalent performance on PCs. The end result is still significantly faster games and better load times for all gamers, though conventional HDDs are less supported now.
3). The PS5 and Xbox’s new storage cache technology doesn’t improve gaming at all (and/or) PC developers refuse to adapt the technology for PCs. PC gaming remains fundamentally limited to the performance capabilities of a 5400 RPM HDD. Console owners all receive a free box of sunshine.
To argue option #3, you’ve got to basically believe both console manufacturers decided to double down on storage performance for no particular reason. The jump from a conventional HDD to a SATA SSD is perceptually larger than the change from a SATA drive to a PCIe 4.0 M.2 drive. The implication here is that we’re about to see consoles take a major leap in storage performance in ways that will encourage developers to optimize games for a storage medium we’ve already been taking advantage of for years.
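That perceptual gap is mostly about random-access latency rather than raw throughput. A quick sanity check with ballpark figures (my assumptions for illustration, not benchmarks):

```python
# Rough, order-of-magnitude random-access latencies, in microseconds.
# These are assumed ballpark figures, not measurements of any specific drive.
access_us = {
    "7200rpm_hdd": 10_000,  # seek plus rotational latency
    "sata_ssd": 100,        # typical NAND read path
    "pcie4_nvme": 20,       # faster interface, lower protocol overhead
}

hdd_to_sata = access_us["7200rpm_hdd"] / access_us["sata_ssd"]
sata_to_nvme = access_us["sata_ssd"] / access_us["pcie4_nvme"]

print(hdd_to_sata)   # 100.0 -- two orders of magnitude
print(sata_to_nvme)  # 5.0   -- a much smaller perceptual step
```

Sequential throughput tells a similar story (roughly 150MB/s vs. 550MB/s vs. 5GB/s), but game loading is arguably dominated by the small random reads where the HDD-to-SSD jump is most dramatic.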
If Microsoft and Sony both opted to spend significant amounts of silicon building specialized hardware decompression blocks that PCs don’t have, it stands to reason that they ought to be better at these tasks than a standard PC would. That doesn’t mean PC gaming won’t be the substantial beneficiary of these optimizations. And all of this assumes that the PS5 really is dramatically faster than a PC in a way that has an impact on gameplay in the first place.
The US Senate had a chance yesterday to protect your online privacy, and you’ll never guess what happened. Okay, maybe you will. They didn’t do it. The effort to rein in government powers granted by the Patriot Act fell short by a single vote, 59 to 37. Even though most senators agreed with the move, the upper chamber lacked the 60 votes necessary to overcome the filibuster. Four absent senators, including at least two who supported the measure, could have put it over the top.
The US famously passed the Patriot Act in the wake of the 9/11 terrorist attacks, granting sweeping powers to law enforcement to collect data on Americans. Congress has been wrestling with reauthorizing the Patriot Act over the last few months, and many elected officials have been pushing for limits in Section 215. That’s the part of the Patriot Act that allows the government to request access to data via secret tribunals authorized under the Foreign Intelligence Surveillance Act (FISA).
The Senate was set to authorize an amendment that would have forbidden the government from collecting your browsing history via FISA courts. Instead, it would need a warrant. Majority Leader Mitch McConnell and most Republicans were opposed to any amendments that would weaken the Patriot Act, but in the end, 24 Republicans broke ranks to vote in favor of the amendment. All but 10 Democrats also voted in favor. However, that wasn’t enough to reach the 60-vote threshold.
Senators Lamar Alexander, Patty Murray, Ben Sasse, and Bernie Sanders were all absent. Sasse and Alexander are Republicans, but Democrats Murray and Sanders were confirmed “yes” votes. Murray’s office claims she was still flying back to Washington DC when the vote happened. Sanders’ camp has not responded to requests for comment.
The likely reauthorization without any amendments will be a major setback for privacy advocates. FISA courts have long been abused to gather data on millions of Americans. Congress made a half-hearted effort to increase oversight with the 2015 USA Freedom Act, a response to the outcry over Edward Snowden’s leak of classified documents. However, the Justice Department’s inspector general has warned FISA courts are still operating basically unhindered, and the Freedom Act has expired.
Warrantless collection of online browsing data will most likely continue under the reauthorized Patriot Act. Your best bet to maintain some semblance of privacy is to ensure you’re always using SSL-enabled websites. You might also want to consider a trustworthy VPN to obscure your activities.
In lieu of the multi-day extravaganza that is normally Nvidia’s flagship GTC in San Jose, the company has been rolling out a series of talks and announcements online. Even the keynote has gone virtual, with Jensen’s popular and traditionally rambling talk being shifted to YouTube. To be honest, it’s actually easier to cover keynotes from a livestream in an office anyway, although I do miss all the hands-on demos and socializing that goes with the in-person conference.
In any case, this year’s event featured an impressive suite of announcements around Nvidia’s new Ampere architecture for both the data center and AI on the edge, beginning with the A100 Ampere-architecture GPU.
Nvidia A100: World’s Largest 7nm Chip Features 54 Billion Transistors
Nvidia’s first Ampere-based GPU, the new A100, is also the world’s largest and most complex 7nm chip, featuring a staggering 54 billion transistors. Nvidia claims performance gains of up to 20x over the previous-generation Volta models. The A100 isn’t just for AI; Nvidia positions it as an ideal GPGPU device for applications including data analytics, scientific computing, and cloud graphics. For lighter-weight tasks like inferencing, a single A100 can be partitioned into up to seven instances to run multiple workloads in parallel. Conversely, NVLink allows multiple A100s to be tightly coupled.
All the top cloud vendors have said they plan to support the A100, including Google, Amazon, Microsoft, and Baidu. Microsoft is already planning to push the envelope of its Turing Natural Language Generation by moving to A100s for training.
Innovative TF32 Aims to Optimize AI Performance
Along with the A100, Nvidia is rolling out a new single-precision floating-point format — TF32 — for the A100’s Tensor cores. It is a hybrid of FP16 and FP32 that aims to keep some of the performance benefits of moving to FP16 without losing as much precision. The A100’s new cores will also directly support FP64, making them increasingly useful for a variety of HPC applications. Along with the new data format, the A100 also supports sparse matrices, so AI networks that contain many unimportant nodes can be represented more efficiently.
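To make the format concrete: TF32 keeps FP32’s 8-bit exponent (and therefore its dynamic range) but only FP16’s 10-bit mantissa. Here’s a small sketch of that truncation in Python — an illustration of the bit layout, not Nvidia’s actual rounding hardware:

```python
import struct

def tf32_round(x):
    """Emulate TF32 storage of an FP32 value: keep the sign bit, the
    full 8-bit exponent, and the top 10 of FP32's 23 mantissa bits.
    (Simple truncation here; real hardware rounding may differ.)"""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits &= 0xFFFFE000  # zero the 13 least-significant mantissa bits
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(tf32_round(1.0))        # 1.0 -- powers of two survive exactly
print(tf32_round(3.1415926))  # 3.140625 -- within 2**-10 relative error
```

The payoff is that values keep FP32’s range (no overflow surprises when swapping it in for FP32 training) while the narrower mantissa lets the Tensor cores run much faster.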
Nvidia DGX A100: 5 PetaFLOPS in a Single Node
Along with the A100, Nvidia announced its newest data center computer, the DGX A100, a major upgrade to its current DGX models. The first DGX A100 is already in use at the US Department of Energy’s Argonne National Lab to help with COVID-19 research. Each DGX A100 features eight A100 GPUs, providing 156 TFLOPS of FP64 performance and 320GB of GPU memory. It’s priced starting at “only” (their words) $199,000. Mellanox interconnects allow multiple DGX A100s to be deployed together, but a single DGX A100 can also be partitioned into as many as 56 instances for running a number of smaller workloads.
In addition to its own DGX A100, Nvidia expects a number of its traditional partners, including Atos, Supermicro, and Dell, to build the A100 into their own servers. To assist in that effort, Nvidia is also selling the HGX A100 data center accelerator.
Nvidia HGX A100 Hyperscale Data Center Accelerator
The HGX A100 includes the underlying building blocks of the DGX A100 supercomputer in a form factor suitable for cloud deployment. Nvidia makes some very impressive claims for the price-performance and power-efficiency gains its cloud partners can expect from moving to the new architecture. Specifically, Nvidia says a typical cloud cluster today includes 50 DGX-1 units for training and 600 CPUs for inference, costs $11 million, occupies 25 racks, and draws 630kW of power. With Ampere and the DGX A100, Nvidia says only one kind of computer is needed, and far fewer of them: 5 DGX A100 units handle both training and inference at a cost of $1 million, occupying one rack and consuming only 28kW of power.
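Taking Nvidia’s before-and-after numbers at face value, the claimed consolidation works out like this:

```python
# Nvidia's own cluster comparison, as stated in the announcement.
dgx1_cluster = {"cost_musd": 11, "racks": 25, "power_kw": 630}
a100_cluster = {"cost_musd": 1, "racks": 1, "power_kw": 28}

cost_x = dgx1_cluster["cost_musd"] / a100_cluster["cost_musd"]
rack_x = dgx1_cluster["racks"] / a100_cluster["racks"]
power_x = dgx1_cluster["power_kw"] / a100_cluster["power_kw"]

print(cost_x, rack_x, power_x)  # 11.0 25.0 22.5
```

An 11x cost reduction, 25x fewer racks, and 22.5x less power — vendor-supplied figures, of course, but they show why hyperscalers pay attention to generational jumps like this.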
DGX A100 SuperPOD
Of course, if you’re a hyperscale compute center, you can never have enough processor power. So Nvidia has created a SuperPOD from 140 DGX A100 systems, 170 InfiniBand switches, 280 TB/s network fabric (using 15km of optical cable), and 4PB of flash storage. All that hardware delivers over 700 petaflops of AI performance and was built by Nvidia in under three weeks to use for its own internal research. If you have the space and the money, Nvidia has released the reference architecture for its SuperPOD, so you can build your own. Joel and I think it sounds like the makings of a great DIY article. It should be able to run his Deep Space Nine upscaling project in about a minute.
Nvidia Expands Its SaturnV Supercomputer
Of course, Nvidia has also greatly expanded its SaturnV supercomputer to take advantage of Ampere. SaturnV was composed of 1,800 DGX-1 systems, but Nvidia has now added four DGX A100 SuperPODs, bringing SaturnV to a total capacity of 4.6 exaflops. According to Nvidia, that makes it the fastest AI supercomputer in the world.
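That 4.6-exaflop figure checks out on the back of an envelope, assuming roughly 1 petaflops of AI compute per DGX-1 (Nvidia’s original rating for that system) and the ~700 petaflops per SuperPOD described above:

```python
# Back-of-the-envelope check on the 4.6-exaflop SaturnV total.
dgx1_pflops = 1         # ~1 PFLOPS of AI compute per DGX-1 (assumption)
superpod_pflops = 700   # per the SuperPOD description

existing = 1800 * dgx1_pflops    # 1,800 PFLOPS from the DGX-1 fleet
added = 4 * superpod_pflops      # 2,800 PFLOPS from four SuperPODs
total_eflops = (existing + added) / 1000

print(total_eflops)  # 4.6
```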
For AI on the Edge, Nvidia’s new EGX A100 offers massive compute along with local and network security
Jetson EGX A100 Takes the A100 to the Edge
Ampere and the A100 aren’t confined to the data center. Nvidia also announced a high-powered, purpose-built GPU for edge computing. The Jetson EGX A100 is built around an A100, but also includes Mellanox CX6 DX high-performance connectivity that’s secured using a line speed crypto engine. The GPU also includes support for encrypted models to help protect an OEM’s intellectual property. Updates to Nvidia’s Jetson-based toolkits for various industries (including Clara, Jarvis, Aerial, Isaac, and Metropolis) will help OEMs build robots, medical devices, and a variety of other high-end products using the EGX A100.
Today Nvidia officially launched its most powerful card-sized IoT GPU ever, the Nvidia Jetson Xavier NX (dev kit $399). We covered the basics of the Xavier NX and its industry-leading MLPerf stats when it was announced in November, but since then we’ve had a chance to get our hands on an early version of the device and dev kit and do some real work on them. Along with the dev kit, Nvidia also introduced cloud-native deployment for Jetson using docker containers, which we also had a chance to try out.
Nvidia Jetson Xavier NX by the Numbers
Built on Nvidia’s Volta architecture, the Jetson Xavier NX is a massive performance upgrade compared with the TX2 and becomes a bigger sibling to the Jetson Nano. It features 384 CUDA cores, 48 Tensor cores, and two Nvidia Deep Learning Accelerator (DLA) engines. Nvidia rates it for 21 trillion operations per second (TOPS) of deep-learning performance. Alongside the GPU is a reasonably capable six-core Nvidia Carmel ARM 64-bit CPU with 6MB of L2 and 4MB of L3 cache. The module also includes 8GB of 128-bit LPDDR4x RAM with 51.8GB/s of bandwidth.
All that fits in a module the size of a credit card that consumes 15 watts — or 10 watts in a power-limited mode. As with earlier Jetson products, the Xavier NX runs Nvidia’s deep-learning software stack, including advanced analytic systems like DeepStream. For connectivity, the developer kit version includes a microSD slot for the OS and applications, as well as two MIPI camera connectors, Gigabit Ethernet, an M.2 Key E slot with Wi-Fi/Bluetooth, and an open M.2 Key M slot for an optional NVMe SSD. Both HDMI and DisplayPort connectors are provided, along with four USB 3.1 ports and a micro-USB 2.0 port.
Cloud-Native Deployment Thanks to Docker Containers
It’s one thing to come up with a great industrial or service robot product, but another to keep it up to date and competitive over time. As new technologies emerge or requirements evolve, updates and software maintenance become a major issue. With the Xavier NX, Nvidia is also launching its “cloud native” architecture as an option for deploying embedded systems. Now, I’m not personally a fan of slapping “cloud-native” onto technologies just because it’s a buzzword. But in this case, at least the benefits of the underlying feature set are clear.
Basically, individual applications and services can be packaged as Docker containers and individually distributed and updated via the cloud. Nvidia sent us a pre-configured SSD loaded with demos, but I was also able to successfully re-format it and download all the relevant Docker containers with just a few commands, which was pretty slick.
Putting the Xavier NX Through Its Paces
Nvidia put together an impressive set of demos as part of the Xavier NX review units. The most sophisticated of them loads a set of docker containers that demonstrate the variety of applications that might be running on an advanced service robot. That includes recognizing people in four HD camera streams, doing full-body pose detection for nearby people in another stream, gaze detection for someone facing the robot, and natural language processing using one of the BERT family of models and a custom corpus of topics and answers.
Nvidia took pains to point out that the demo models have not been optimized for either performance or memory requirements, but aside from requiring some additional SSD space, they still all ran fairly seamlessly on a Xavier NX that I’d set to 15-watt / 6-core mode. To help mimic a real workday, I left the demo running for 8 hours and the system didn’t overheat or crash. Very impressive for a credit-card-sized GPU!
Running multiple Docker container-based demos on the Nvidia Jetson Xavier NX.
The demo uses canned videos, as otherwise, it’d be very hard to recreate in a review. But based on my experience with its smaller sibling, the Jetson Nano, it should be pretty easy to replicate with a combination of directly-attached camera modules, USB cameras, and cameras streaming over the internet. Third-party support during the review period is pretty tricky, as the product was still under NDA. I’m hoping that once it is out I’ll be able to attach a RealSense camera that reports depth along with video, and perhaps write a demo app that shows how far apart the people in a scene are from each other.
Developing for the Jetson Xavier NX
Being ExtremeTech, we had to push past the demos for some coding. Fortunately, I had just the project. I foolishly agreed to help my colleague Joel with his magnum opus project of creating better renderings of various Star Trek series. My task was to come up with an AI-based video upscaler that we could train on known good and poor versions of some episodes and then use to re-render the others. So in parallel with getting set up on my desktop with its Nvidia GTX 1080, I decided to see what would happen if I worked on the Xavier NX.
Nvidia makes development — especially video and AI development — deceptively easy on its Jetson devices. Its JetPack toolset comes with a lot of AI frameworks pre-loaded, and Nvidia’s excellent developer support sites offer downloadable packages for many others. There is also plenty of tutorial content for local development, remote development, and cross-compiling. The deceptive bit is that you get so comfortable that you just about forget that you’re developing on an ARM CPU.
At least until you stumble across a library or module that only runs on x86. That happened to me with my first choice of super-resolution frameworks, an advanced GAN-based approach, mmsr. Mmsr itself is written in Python, which is always encouraging as far as cross-platform support goes, but it relies on a tricked-out deformation module that I couldn’t get to build on the Jetson. I backed off to an older, simpler, CNN-based scaler, SRCNN, which I was able to get running. Training speed was only a fraction of my GTX 1080’s, but that’s to be expected. Once I get everything working, the Xavier NX should be a great solution for actually grinding away on the inference-based task of doing the scaling.
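For the curious, SRCNN’s three-layer structure is simple enough to sketch without any deep-learning framework at all. Below is a toy NumPy version with random weights — it shows the pipeline shape only (the layer sizes follow the original 9-1-5 SRCNN design), and it is not my actual training code:

```python
import numpy as np

def conv2d(img, kernels):
    """Naive 'same' convolution. img: (H, W, Cin); kernels: (k, k, Cin, Cout)."""
    k = kernels.shape[0]
    pad = k // 2
    H, W, _ = img.shape
    out = np.zeros((H, W, kernels.shape[3]))
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)))
    for i in range(H):
        for j in range(W):
            # Contract the (k, k, Cin) patch against all Cout kernels at once.
            out[i, j] = np.tensordot(padded[i:i+k, j:j+k], kernels, axes=3)
    return out

def srcnn(lowres, w1, w2, w3, scale=2):
    # SRCNN refines an already-interpolated image; nearest-neighbour
    # stands in for bicubic here to keep the sketch dependency-free.
    up = lowres.repeat(scale, axis=0).repeat(scale, axis=1)
    h = np.maximum(conv2d(up, w1), 0)  # patch extraction (9x9), ReLU
    h = np.maximum(conv2d(h, w2), 0)   # non-linear mapping (1x1), ReLU
    return conv2d(h, w3)               # reconstruction (5x5), linear

rng = np.random.default_rng(0)
w1 = rng.standard_normal((9, 9, 1, 64)) * 0.01
w2 = rng.standard_normal((1, 1, 64, 32)) * 0.01
w3 = rng.standard_normal((5, 5, 32, 1)) * 0.01

out = srcnn(rng.random((8, 8, 1)), w1, w2, w3)
print(out.shape)  # (16, 16, 1) -- a 2x upscale of the 8x8 input
```

Training fills in `w1`–`w3` by regressing against known high-resolution frames; inference is just this forward pass, which is why it maps so well onto a small device like the Xavier NX.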
Is a Xavier NX Coming to a Robot Near You?
In short, probably. To put it in perspective, the highly-capable Skydio autonomous drone uses the older TX2 board to navigate obstacles and follow subjects in real time. The Xavier NX provides many times (around 10x in pure TOPS numbers) the performance in an even smaller form factor. It’s also a great option for DIY home video applications or hobby robot projects.
One of the long-term ramifications of the tensions between the US and China has been an acceleration of China’s efforts to move its government and corporate users towards natively-produced semiconductors. China famously imports more semiconductors than oil, and the country has been pushing to change this. This week, the Semiconductor Manufacturing International Corp (SMIC) announced it had begun commercial mass production of the Kirin 710A for Huawei on its 14nm FinFET process. It’s the first time Huawei has built hardware with a foundry other than TSMC, and it’s a major milestone for SMIC as well.
Neither the Kirin 710 nor SMIC’s 14nm FinFET process is particularly noteworthy in and of itself. The Kirin 710 dates to mid-2018 and combines four Cortex-A73 CPU cores on the same die with four Cortex-A53 cores. The first company to ship 14nm silicon was Intel, which brought the node to market in 2014.
A TSMC FinFET, up close and personal. Different foundries build their fins with somewhat different shapes and implementations.
The significance of SMIC shipping a Huawei SoC is that a mainland Chinese foundry is shipping the chip at all. I’m willing to grant that we’ve seen no silicon and don’t know the characteristics of SMIC’s 14nm node. Since there is no standard-setting body that defines what “14nm” is, it’s certainly possible that SMIC’s 14nm FinFET node might look more like Intel’s first-generation 22nm node in certain respects. None of that really matters. It’s still a major achievement for SMIC to be shipping a part this close to the leading edge.
SMIC, as far as I’m aware, is the only mainland Chinese foundry even talking about pushing into leading-edge lithography. We often talk about how the number of foundries at the leading edge has shrunk with each generation, down to just three at 7/10nm — Intel, Samsung, and TSMC. SMIC isn’t ready to talk about 7nm just yet, but it already has a process it calls “N+1” heading into production, with power reductions of up to 57 percent, performance improvements of up to 20 percent, and a logic area reduction of 63 percent.
There are certain restrictions on the chip manufacturing hardware Chinese companies are allowed to purchase and I’m not certain how this impacts exactly which chips they can build. There’s no sign of any Western companies moving hardware to SMIC, and the Kirin 710 is an already-proven design. Without knowing what kind of volume SMIC will run of the part, it’s not clear how much impact this will even have on foundry volumes.
What these announcements collectively demonstrate is that China is quite serious about ramping its own semiconductor industry and competing more effectively with companies like Samsung, TSMC, and Intel. Some companies, like GlobalFoundries and UMC, earn a profitable existence serving as second source manufacturers on older nodes with cheap prices. Alternately, they may serve as niche manufacturers for very specific technologies that aren’t available in the general market. GF’s specialized 22FDX and supposedly still-in-development 12FDX would both fall into this category. Only a few attempt to play at the top of the space. SMIC is determined to be one.
Dell demonstrated an astonishing amount of chutzpah today when it unveiled a second-generation Alienware Area-51m — and declared that whoops, the first-generation of the “upgradeable” laptop isn’t actually going to be upgradeable after all. The company has officially stated that: “Area-51m R1 only supports GPU upgrades within its current generation of graphics cards.”
First, a refresher: The original Alienware Area-51m was a desktop replacement laptop with a socketed CPU that could be swapped for other chips and, at least in theory, the ability to upgrade to different graphics cards in the future. An Alienware spokesperson claimed last year that Dell was committed to providing upgrades for the platform, but apparently what Dell meant was that it would only provide upgrades within the product family. In other words, if you have a GTX 1660 Ti, you can swap up to an RTX 2080. What you won’t be able to do is upgrade to any graphics card better than that.
This Has Always *Literally* Been the Problem
The reason laptop graphics cards aren’t upgradeable has nothing to do with AMD, Nvidia, or the PCIe standard. The reason laptop GPUs can’t be upgraded is that no OEM has ever felt it would be profitable to create and commit to supporting a platform for multiple product generations. Laptop GPUs have to be built to very strict size tolerances, which is why there’s never been a single common standard. What Dell promised to do last year, effectively, was to create one, specifically for its Alienware Area-51m line of products. Building a common laptop GPU card standard would allow a Dell Mobile (or what have you) RTX 2080 to be swapped for a Dell Mobile RTX 3080 or 4080 when the time came, because all of these cards would use the same, Dell-designed form factor.
Dell is going to design custom mobile GPUs to fit its various XPS and Alienware laptops no matter what. This isn’t about whether the company was willing to build a custom GPU. It’s a question of whether Dell was willing to commit to building a series of compatible custom GPUs over time, in order to provide the market with an actual upgrade path. The answer? Even after promising customers that it would provide an “upgradeable” GPU, no, it wasn’t.
I refuse to let Dell off the hook for this, even a little. The company communicated that it would provide further upgrades, and it knew damn well that “upgrade” is generally read to mean “components introduced after the laptop’s purchase date,” not “alternate hardware I could have bought at the time, but didn’t.” This was a laptop specifically and directly sold on the promise of offering a compatible platform for future hardware.
“Gamers have made it clear that they’ve noticed a lack of CPU and GPU upgradability in gaming laptops. We decided the best way to deal with that problem was to launch an upgradeable laptop at a substantial price premium, then provide no actual upgrades for it.” — Image by Dell, alternative caption by ExtremeTech
Declaring that the use of a socketed Intel motherboard made the Area-51m “upgradeable” in some fashion now looks like the profoundly cynical move of a company that never intended to deliver what it promised. It was always obvious that Dell’s ability to deliver an upgradeable CPU would hinge on whether Intel launched 10th Gen chips on its existing motherboard platforms or if it required new motherboards. The question of GPU upgrades, on the other hand, was always going to hinge on what Dell was willing to make available. The Area-51m website still claims that the product offers “CPU and GPU upgradability.” It neglects to mention that you’re literally paying for a feature Dell hasn’t previously bothered to support for an entire generation of customers.
And no — the Alien Graphics Amplifier doesn’t cut it. First of all, the Alienware Area-51m isn’t advertised as offering an upgradeable GPU via the AGA; it’s advertised as offering an upgradeable GPU. Second, the AGA is a $220 upgrade. That’s not a terrible price, but we’re already talking about customers who paid a premium for a laptop advertised with CPU and GPU upgradability.
Now Dell is launching a second-generation Area-51m. I’d detail and discuss it here if I had the slightest intention of recommending you give money to a company that treats its customers this way.
I don’t honestly care whether there’s a better chance that the R2 will actually get hardware upgrades. Every single customer that bought an Alienware Area-51m likely bought it expecting to upgrade the GPU much more than the CPU. The entire justification for buying the Area-51m (as opposed to one of Alienware’s other laptops) was the upgradeability. It’s true that Dell never specifically promised that it would offer GPU upgrades for the Alienware Area-51m. All it did was advertise that the laptop’s GPU was “upgradeable” while hiding behind a definition of “upgradeable” that no enthusiast would ever use. This is a distinction without a meaningful difference as far as I’m concerned.
Last year, I was willing to extend the benefit of the doubt when the company began shipping the RTX 2060 and 2070 modules it promised. As I wrote: “The flip side to all of this is that it’s rather nuts to pay $1,140 for an RTX 2080 if you already own an RTX 2060 or 2070. Frankly, it’d be pretty nuts to pay that much money to upgrade from a GTX 1660 Ti to an RTX 2080. But the first run of GPU upgrades for this hardware family was always going to be the weakest upgrade tier. What matters far more is whether Dell continues to put effort into the program in the first place.”
As is probably clear by now, I specifically repudiate my own previously optimistic guidance. CPU upgrades are nearly irrelevant for gaming. GPU upgrades are what matters.
Last year was supposed to be the introduction of the DGFF — the Dell Graphics Form Factor. After today, the company might want to change the acronym. I humbly suggest Dell Gaming-Fully Upgradeable, shortened as “DG-FU,” might be a better name instead. At the very least, it seems to capture more of the company’s actual attitude towards the gaming public.
Dino Crisis (not to be confused with “Dino Crysis,” a game we literally just made up yet somehow want to play) is a classic Capcom series based on the core principles of Jurassic Park, but without any of the troublesome IP entanglements or need to pay licensing fees. The Dino Crisis series was never the hit that the original Resident Evil turned out to be, but it’s maintained a solid base of fans over the years — rumors that the Xbox One would feature a new Dino Crisis 4 circulated widely early in that console’s life.
Now, one fan has taken it upon themselves to do what Capcom hasn’t — extensively remaster the first game. The new mod is a massive update and includes a replacement for the old DirectDraw 5.0 library (implemented via DX9, with DX11 support coming); swaps out DirectInput 3.0/5.0 for Xinput, RawInput, and DirectInput (effectively enabling modern controller support); replaces DirectShow with FFmpeg; adds a new 3D rendering mode and support for up to 4K; solves various frame-rate issues; and adds optional adaptive widescreen modes for aspect ratios like 16:9 and 16:10.
This is a truly impressive set of overhauls for such an old game, though there are some restrictions on playing it. The author, @REBehindtheMask, notes that the patch won’t work with the MediaKite release of the game (it requires the Japanese SourceNext release). You can, however, use a patch available on his download site to make your MediaKite version compatible with ClassicREbirth if you wish to do so.
The original source material rendered at a fixed 640×480 resolution with 16-bit color and was designed in the early days of PS1 emulation. @REBehindtheMask writes: “[The] 3D rendering in this port shows typical issue of early PlayStation enhanced emulation, with wobbly polygons all over the place and textures warped to complete distortion. This happens because CAPCOM literally emulated the whole PlayStation GPU and GTE (Geometry Transformation Engine, i.e. the chip that does 3D transformation) by implementing something akin to emulators like Bleem or ePSXe.”
Weird side effect: apparently Classic REbirth is pretty much compatible with the US version of the game as long as you swap the exe and use the video patch. #DinoCrisis #ClassicREbirth
ePSXe and Bleem were both emulators from the late 1990s and early 2000s, designed to get these kinds of games running on the modest PC hardware of the era, not to provide pixel-perfect recreations of the intended game. ePSXe was developed into the modern era, but the version that would have been paired with a game like Dino Crisis to get it running on PCs in the Win9x era was very much not a modern emulator. In this case, the re-release of Dino Crisis actually appears to fix some bugs, including audio problems.
Computer games lied to me about my co-workers’ likely job attire.
There hasn’t been any rework of the game’s image assets, beyond the intrinsic benefits of better rendering work, so get ready for a return to some PS1-era classics with regards to low-poly models, low-detail art, and the like. It’s not even clear to me how much a straight AI upscale of the underlying assets would help. While they could undoubtedly be improved in certain respects, increasing overall image fidelity would also draw attention to flaws in the base models. I can’t imagine the gun Regina is holding will be improved if its upscaled texture looks even more like a human femur, for example.
There are still clear limits to what AI upscaling can accomplish in gaming and video — we’re at very early days for the tech. A game like Dino Crisis might benefit more from new textures than from upscaling the old ones. Then again, it may be easier to launch a project like that now that the ancient title is running on something like a modern framework.
Fans have been creating mods and updated art for as long as games have been moddable. But it feels like we’ve seen an uptick in these projects in recent years, thanks to the advent of better AI and post-processing tools, as well as folks like @REBehindtheMask, who launch these kinds of general update projects for multiple titles in a row.
Pick up a 5TB external HDD from WD and you will be able to store many thousands of photos and videos while keeping your computer’s internal storage from filling up. It’s also excellent for keeping backups of important files.
This external drive can store up to 5TB of data with password protection and 256-bit AES hardware encryption. The drive also has a durable metal enclosure, and you can get it from Best Buy marked down from $149.99 to $99.99.
Final Fantasy 7 is arguably the most popular entry in the Final Fantasy franchise, and the new remake promises to be just as good but with updated game mechanics and HD image quality. If you are a die-hard fan of the Final Fantasy series, you can order the remake from Walmart now for just $49.94.
Dell’s new XPS 15 9500 laptops feature 16:10 ratio displays. This particular model has a display with a resolution of 1920×1200, giving you plenty of room on the screen to work with. The notebook also has a quad-core processor that can hit clock speeds up to 4.5GHz. Right now you can get this system from Dell marked down from $1,299.99 to $1,249.99 with promo code 50OFF699.
Before Fortnite, Epic Games’ biggest contribution to gaming was the Unreal Engine. It has been six years since a major update, but that will change with the release of next-generation game consoles from Sony and Microsoft. Today, Epic showed off a demo of the new Unreal Engine 5 (UE5) running on a PlayStation 5, and it’s pretty amazing.
The first thing to know about this demo is that it’s not pre-rendered — it’s rendering in real-time on the PlayStation 5 dev kit. That speaks to the power of the console’s hardware as well as the abilities of the engine, which looks almost photorealistic. Epic designed this demo to show off the two new graphics technologies at work in Unreal Engine 5: Nanite and Lumen.
Epic Games says Nanite is a “virtualized micropolygon geometry” technology, which is a fancy way of saying it handles polygon counts so developers don’t have to. There are over a billion triangles of source geometry in each frame of the demo, but Nanite compresses them losslessly to around 20 million drawn triangles. Nanite manages the data streams and scaling in real time without loss of quality. This will allow developers to import film-quality visual assets, including ZBrush sculpts and CAD files, and retain all that detail.
Unreal Engine 5 is built from the ground up for 4K development. The triangles rendered in Nanite are so small they’re essentially pixels on a 4K display. There’s no way to increase the level of detail until 8K or 16K resolutions become mainstream.
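Epic hasn’t published Nanite’s internals, but the pixel-bound argument above is easy to sanity-check with a toy calculation. This is purely illustrative — it is not how Nanite actually selects geometry:

```python
def useful_triangles(source_tris, width, height, tris_per_pixel=1):
    """Toy model of the article's point: past roughly one triangle per
    screen pixel, extra source geometry can't add visible detail, so
    clamp the drawn count to the pixel budget."""
    return min(source_tris, width * height * tris_per_pixel)

# One billion source triangles against 4K and 8K displays:
print(useful_triangles(1_000_000_000, 3840, 2160))  # 8,294,400 at 4K
print(useful_triangles(1_000_000_000, 7680, 4320))  # 33,177,600 at 8K
```

With a 4K display topping out around 8.3 million pixels, the ~20 million triangles Epic quotes already works out to a couple of triangles per pixel — which is why higher resolutions, not more source geometry, become the next bottleneck.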
Lumen is Epic’s new global illumination tool. Like Nanite, Lumen handles a lot of the heavy lifting in real-time. Developers can specify lighting types and angles, and the engine calculates shadows, indirect lighting effects, and so on. For example, Lumen can react instantly when a character sweeps a flashlight across a room. Developers don’t have to specifically plan for that.
Epic Games says it hopes the tools built into Unreal Engine 5 will help smaller teams create high-end games. Developers will get access to the UE5 preview in early 2021, and a full release will come at the end of that year. Epic will, of course, migrate its extremely popular Fortnite battle royale to the new engine before the final version is public. So, that will be our first chance to see the new engine in action. Epic says developers will owe no royalties on the first $1 million in revenue from UE5 games. It will take 5 percent of all sales beyond that, but it will waive the royalties for sales on its own Epic Games Store.
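The royalty terms described above are easy to work through with a little arithmetic. This sketch (the function name is my own, not Epic’s) shows what a developer would owe at a few revenue levels, assuming the sales aren’t through the Epic Games Store, where the royalty is waived:

```python
def ue5_royalty(gross_revenue, exempt=1_000_000, rate=0.05):
    """Royalty owed under the terms described above: nothing on the
    first $1 million of gross revenue, 5 percent on everything beyond."""
    return max(0.0, gross_revenue - exempt) * rate

print(ue5_royalty(800_000))    # under the threshold -> 0.0
print(ue5_royalty(3_000_000))  # 5% of the $2M above the threshold -> 100000.0
```

So a game grossing $3 million outside Epic’s own store would owe $100,000, while anything under the $1 million threshold owes nothing at all.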
If you’re on the hunt for a new job, or just looking to add some serious qualifications to your resume, you should consider trying to break into the world of ethical hacking.
While your mind may instantly go to scenes of dingy, hoodie-clad bad guys hacking bank accounts and government websites, ethical hacking is not only a hugely in-demand job sector right now, it’s also extremely lucrative. As in, average salaries of around $120,000 a year kind of lucrative.
The four-part Complete Cyber Security Course is the crown jewel of this 10-part course package, which includes more than 96 hours of top-flight instruction in becoming a cybersecurity professional. Penetration testing isn’t just about finding vulnerabilities in a computer system, but also about fixing them — and in this training led by expert and noted cybersecurity consultant Nathan House, users are introduced to the tools to do just that.
From understanding operating system security and privacy functionality to maximum security architecture to bypassing censors, firewalls, and proxies, this is the deep-dive exploration any new ethical hacker needs.
Meanwhile, the remaining six courses in this bundle round out any student’s cybersecurity training. Courses explore analyzing network reconnaissance results and implementing countermeasures; spotting major IT threats like botnets, code exploits, SQL injection, and social engineering; how to use Nmap to fully scan and map a computer network; and how to craft your very own premium hacker tools using Python.
The total value of all 10 courses comes to over $1,300. However, the entire collection is on sale now for a fraction of that: just $39.90.
Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com.
Google repeatedly bungled its attempts to make Android tablets a viable product category, but Amazon has slowly but surely grown its Fire tablet brand without Google’s help. These tablets cost next to nothing, but the specs are lacking. Amazon’s new 8-inch Fire tablet is getting a substantial boost today, though. The new Fire HD 8 and 8 Plus have faster processors, more storage, and USB-C.
Amazon’s tablets run Android, but it’s not the version of Android you’re used to seeing on phones. Amazon took the open-source parts of Android and built a UI around its content and services. That’s part of the reason Fire tablets are so inexpensive — they’re a way to get you to spend more money with Amazon. Android apps on these devices come via the Amazon Appstore, and there are no Google services pre-installed.
The new Fire HD 8 has a 2GHz quad-core chip that Amazon says is 30 percent faster than the 1.3GHz processor in the previous version. That tablet also started at 16GB of storage, a truly anemic allotment these days. The new HD 8 comes with either 32GB or 64GB. RAM is also up to 2GB from 1.5GB last time. A Kids Edition HD 8 is also available — it’s identical to the HD 8 but comes with a large bumper case and an enhanced warranty.
Charging is changing with the new Fire HD 8. Amazon has finally added a USB Type-C port to this tablet, matching the USB-C port on its recently refreshed 10-inch tablet. Amazon has long stuck with microUSB charging because it’s cheaper to implement and many households had a cache of those cables from other devices. USB-C is sufficiently common at this point that the change is warranted.
The Fire HD 8 Plus in its wireless charging dock. The 1280×800 screen is identical across all HD 8 variants.
There is a “Plus” variant of the tablet as well. This device adds a Qi wireless charging coil, and Amazon has a charging dock it would like to sell you. The Fire HD 8 Plus should also charge on any standard Qi pad. The Plus variant also comes with 3GB of RAM and a 9W charger in the box, whereas the base model comes with a 5W plug. Both tablets support up to 15W thanks to the new USB-C port. Most Type-C phone chargers from the past few years should be able to hit that.
These upgrades don’t come without a cost. Specifically, the base model Fire HD 8 is now $89.99, $10 more than the old tablet. The Kids Edition is $139.99. The Plus is $109.99 — you’re paying $20 extra for wireless charging, more RAM, and a bundled 9W plug. These are also the “special offers” prices, which means you can look forward to ads on the lock screen. The version free of ads costs another $15. All three Fire HD 8 tablets ship on June 3rd.
Ever since Apple switched to x86 chips, there’s been discussion about the CPUs that Intel builds for the company and the degree to which any of them are custom silicon. Now, there’s some evidence that Intel may have reserved an entire lineup of parts explicitly for Apple MacBooks.
When we say that Intel has built a custom chip for a company like Apple, it doesn’t mean Intel has created a different microarchitecture or a complete SoC with a unique set of capabilities only for Apple. While Intel does theoretically have the ability to perform either of those tasks for a customer through a client foundry arrangement, the company doesn’t offer this level of customization. Instead, what Intel has done in the past is offer specific combinations of cache, core counts, and frequencies, in whatever TDP bin the customer requires.
We actually saw one of these types of custom chips surface in the consumer market a few months back, with the Intel CC150 — an 8C/16T CPU with a 3.5GHz constant clock and multi-threaded performance comparable to the 9700K in a much lower power envelope.
Up until recently, Intel had listed a Core i7-1068G7 on the Ark.Intel.com website. Now, that chip has vanished, with the Core i7-1065G7 at the top of the Ice Lake-derived product family instead. The Core i7-1068G7 has quietly moved over to become the Core i7-1068NG7, which reportedly means that the chip is now an Apple-only part.
The Core i7-1068NG7 is a rather nice-looking Apple part, at that. With a base clock of 2.3GHz, an all-core turbo of 3.6GHz, and a single-core peak turbo of 4.1GHz, the CPU guarantees a higher overall level of performance than the Core i7-1065G7, which has a base clock of 1.3GHz, a 3.5GHz all-core boost, and a 3.9GHz single-core boost.
Intel Core i7-1068NG7
I would expect the Core i7-1068NG7 to offer at least a modest performance improvement over the i7-1065G7 solely on the basis of its base clock and 28W TDP (as opposed to 15W base with a 25W TDP Up option). A 28W TDP doesn’t mean that a CPU is literally limited to 28W of power consumption at all times — the number refers to the average amount of thermal energy the CPU cooler needs to be able to dissipate over time, not the maximum power consumption of the SoC at any particular moment. With a full 1GHz of additional base clock, the Core i7-1068NG7 should be noticeably snappier than the i7-1065G7 in sustained workloads.
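The clock figures quoted above make the shape of the improvement clear: the uplift is dramatic at base clock, where sustained workloads live, and modest at the turbo tiers. A quick calculation:

```python
# Percentage uplift of the i7-1068NG7 over the i7-1065G7 at each
# clock tier, using the frequencies quoted in the article.
clocks_ghz = {  # tier: (i7-1065G7, i7-1068NG7)
    "base":        (1.3, 2.3),
    "all-core":    (3.5, 3.6),
    "single-core": (3.9, 4.1),
}
for tier, (old, new) in clocks_ghz.items():
    print(f"{tier:12s} {new / old - 1:+.0%}")
```

That’s roughly a 77 percent higher base clock against single-digit gains at turbo, which is why the benefit should show up mostly in long-running, thermally limited workloads rather than bursty ones.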
As for why Apple wants it? Probably to position its machines a little ahead of everyone else’s. There have been rumors that Apple is unhappy with Intel’s long pause on 14nm, and I think we can assume there’s at least some truth to them — Apple, after all, has been famously burned before by a CPU vendor’s difficulty keeping it supplied with chips. All of the rumors around the Apple-ARM CPU effort suggest that it began in earnest just a few years ago, which also tracks with what we know about Apple’s overall level of CPU performance and the likelihood that it could field an SoC capable of competing with an Intel x86 chip in the first place.
I won’t pretend to know what kind of ARM machine Apple will or won’t launch in 2021, or what the impact will be on Intel. There’s a lot of rumor around these products, but not much fact. I think it’s interesting to see Intel pulling in these chips and dedicating them specifically to Apple, but it also tracks with Apple’s occasional willingness to pay for semi-custom chip designs in the past.
If the rumor mill is accurate, Intel’s Ice Lake won’t be the CPU architecture that faces off against whatever Apple is planning. Tiger Lake is supposed to launch later this year, with Alder Lake expected in 2021. If Apple were to delay its own launch (or if the theorized timeline is incorrect), we might see its own ARM device going up against either Alder Lake or Meteor Lake, Intel’s 7nm architectural refresh.
It’s interesting to see what Intel is doing to potentially tend to the Apple relationship this year. It’ll be very interesting to see what happens next year and the year after that. There are those who believe that ARM has a fundamental advantage over x86 that will shortly begin to prove itself, and those who believe the fundamentals of silicon design now play a larger role in CPU performance than instruction set. Whether an Apple-built CPU core can beat an AMD/Intel CPU will come down to the specifics of the microarchitecture rather than the ISA.
Used in this context, Greenberg’s tweet implies that 60fps is the “standard” output on Xbox Series X. He does not declare that the Xbox Series X has specified 60fps as some kind of absolute must-meet target, but I can see why people read the tweet to mean that Xbox Series X games would be required to hit 60fps as a minimum performance target.
This caused no small amount of buzz in the Xbox world, but Microsoft has already pushed back against the rumor, reaffirming that no, they won’t be forcing developers to hit minimum frame rate targets this time around, either:
Developers always have flexibility in how they use the power, so a standard or common 60fps is not a mandate.
Microsoft is trying to have it both ways when it comes to 60fps. To gamers, the company is emphasizing that 60fps is now a realistic expectation for console games because it wants them to look forward to that kind of play.
Developers, however, are being told that they don’t need to hit a guaranteed 60fps in order to build an Xbox Series X game. Developers want to know that they’ve got the creative control to build the game they want to build, or that they won’t be held up in project approval hell over a technical glitch resulting in a less-than-perfect 60fps frame rate.
For better or worse, console and PC manufacturers have never defined a minimum frame rate as part of any formal standard. A developer might choose to define their game’s minimum hardware specification as being the hardware required to deliver at least a consistent 30fps, or they might just test a grab bag of components and take a good guess.
The 30fps “standard” on consoles is no such thing. DigitalFoundry has catalogued a wide range of instances where average frame rates on the Xbox One or PS4 have fallen into the mid-20s, particularly during difficult game sequences. PC games aren’t exactly perfect at this sort of thing, either — well-maintained frame rates at near-minimum specs tend to get called out precisely because a lot of developers don’t put much thought into their minimum game specs.
Microsoft’s official position is that it is offering gamers and developers a superior solution by allowing them to pick a frame rate and a detail level that they want to target with much more flexibility.
“Ultimately, we view resolution and framerate as a creative decision… in previous generations, sometimes you had to sacrifice framerate for resolution,” Xbox developer Jason Ronald told Eurogamer. “With this next generation, now it’s completely within the developers’ control. And even if you’re building a competitive game, or an esports game, or a twitch fighter or first-person shooter, 60 frames is not the ceiling anymore.”
This kind of flexibility is, dare I say it, rather PC-ish. 60fps has been an idealized target for the industry for a long time, but game frame rates are ultimately more about what balance you want to strike between speed and graphics quality than hitting any specific single number. A perfectly smooth 30fps will probably be a better experience than a wildly varying 60fps average. There are no formal minimum frame rate targets in the PC universe, even if hitting 60 fps in fast gameplay is very much expected.
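The trade-off described above is easiest to see in frame times rather than frame rates. A sketch (the sample frame times are invented for illustration) shows both the per-frame budget each target implies and how an “average 60fps” can hide exactly the kind of hitching that makes a steady 30fps feel better:

```python
# Frame-time budgets: what each fps target means in milliseconds.
def frame_budget_ms(fps):
    return 1000.0 / fps

for fps in (30, 60, 120):
    print(f"{fps:3d} fps -> {frame_budget_ms(fps):.1f} ms per frame")

# Hypothetical frame times that *average* ~16.7ms (i.e., "60fps"),
# but the 40ms hitch is what the player actually feels.
samples_ms = [10, 10, 40, 10, 10, 20]
avg_ms = sum(samples_ms) / len(samples_ms)
print(f"average frame time: {avg_ms:.1f} ms (~{1000 / avg_ms:.0f} fps)")
```

The averages look identical on a spec sheet; the frame-time distribution is what separates a smooth experience from a wildly varying one.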
Microsoft needs to be more careful with its language, because painting 60 fps as a “standard” on the platform is the wrong way to communicate what the company is trying to convey: namely that the Xbox Series X can hit 60fps in normal play if developers choose to target it, but is not required to do so. That puts the platform more-or-less at parity with PCs, which operate under similar general expectations but no absolute requirements.
We haven’t heard much from Intel’s storage division lately, but there are rumors that the company will make several announcements in the not-too-distant future.
All of these rumors come from BlocksandFiles.com. Reportedly, Intel is working on a new type of QLC NAND that would see it stretch to 144 layers, up from 96 layers today. Such parts likely wouldn’t ship before late 2021, but they’d be a significant capacity improvement over modern hardware. 3D die stacking has been driving the capacity improvements in NAND for the past few years, and manufacturers including Samsung, Intel, WD, and Micron are all moving to 100+ layer designs.
Intel is also still reportedly working on its PLC (penta-level-cell) technology for NAND, which stores five bits of data per cell, compared with one bit for SLC, two bits for MLC, and three for TLC NAND. Each additional bit of storage per cell has always required steep sacrifices in terms of drive performance and long-term program-erase endurance. QLC NAND, for example, is generally recommended for “cool” storage drives and workloads with limited writes, due to the small number of program/erase cycles available.
PLC NAND would require 32 charge levels to be stored in each cell. Intel, however, seems to think it has a path toward commercializing the technology, and a 1.25x increase in data storage per cell would be welcome. Theoretically, the lower write endurance can be compensated for with larger pools of spare NAND on-drive, and TLC and QLC drives already commonly boost their performance by dedicating small portions of the drive as SLC cache.
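The numbers above follow directly from the bits-per-cell math: the charge levels a cell must reliably distinguish grow as two to the power of the bit count, which is why each extra bit gets disproportionately harder.

```python
# Charge levels per cell grow as 2**bits, while capacity grows linearly.
cell_types = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4, "PLC": 5}
for name, bits in cell_types.items():
    print(f"{name}: {bits} bits/cell -> {2 ** bits} charge levels")

# Density gain moving from QLC to PLC at the same cell count:
print(f"QLC -> PLC density gain: {5 / 4:.2f}x")
```

Doubling the charge levels (16 to 32) buys only a 1.25x capacity gain, which is the trade-off Intel is weighing.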
New Optane Coming on PCIe 4.0
There’s no support for PCIe 4.0 baked into Intel’s upcoming Comet Lake CPUs, but motherboard manufacturers are loudly signaling that their Comet Lake boards are all equipped for the feature. That means we can expect an Intel chip that does support PCIe 4.0 to show up at some point — and there’ll be Optane storage ready for the standard when it does.
Intel’s storage roadmap along with Xeon platforms. We’re in the second column.
Bringing Optane to PCIe 4.0 shouldn’t change how it stacks up against equivalent NAND. Here’s what I mean: a PCIe 4.0 NAND SSD is faster than a PCIe 3.0 NAND SSD, but it doesn’t automatically offer dramatically better latency. A PCIe 4.0 NAND SSD supports the same types of read/write operations as a PCIe 3.0 drive, and it has the same strengths and weaknesses. Intel hasn’t given many details on what to expect from next-generation Optane, but we’re assuming that the storage medium will continue to offer its historical strengths (very strong performance at low queue depths) and that Intel will continue to position it as a DRAM alternative in certain server installations.
I’m very curious to see what Intel has in mind for PLC storage, and I’d like to see second-generation Optane create a more meaningful gap between itself and NAND. Optane, I think, has to be graded on a bit of a curve. It’s very difficult for any company to ramp a brand-new storage product up to compete with already-established solutions, which is why it took NAND so long to emerge as a viable competitor to hard drives. To date, Optane has demonstrated some superior capabilities to NAND in specific use-cases, but neither NAND nor Optane has upset DRAM’s position in the memory hierarchy. Intel wants to push Optane into RAM sockets going forward, so the more DRAM-like its overall performance, the better.
It has been more than a year since Huawei landed on the US Commerce Department’s “entity list,” which prevents it from doing business with most US firms. That cut Huawei’s growing mobile business off from Google’s Android license, resulting in multiple smartphones that lack apps like Gmail, Maps, and Google Search. Huawei can’t release any new phones with Google certification, so it’s releasing the P30 Pro for the third time to continue providing Google apps. The P30 Pro “New Edition” isn’t very new, in spite of the name. But that’s also why it can have those apps.
Huawei found itself on the wrong side of the US government for a variety of reasons. President Trump has been quick to condemn Chinese companies, and US intelligence agencies have expressed concern about Huawei’s close ties with the Chinese government. This move also came as Huawei was making major inroads to supply partners around the world with 5G infrastructure, which the US opposes.
As long as Huawei is subject to US trade restrictions, it can’t get any new Android devices certified by Google. Without Google certification, it’s left with just the open-source parts of Android along with the custom apps developed for the Chinese market. Huawei has launched new flagship phones like the Mate 30 Pro without access to the Play Store, but it also used a loophole to re-release the P30 Pro last year with Google apps. That was after the initial launch in spring 2019 just before the trade ban. Now, it’s basically doing a second re-release with the “New Edition.”
The German ad for the New Edition claims you can use the phone “as usual.”
The new P30 Pro has the same internal specifications including a Kirin 980 ARM chip and 8GB of RAM. Because it’s the same platform, Huawei can load the New Edition with all the Google services you’d expect on an Android phone. Newer phones like the 5G-enabled P40 Pro cannot run that software. Just like the 2019 P30 re-release, the only difference this time around is a new color option. You can get the P30 Pro New Edition in silver, a color that debuted on the P40.
Currently, the re-released P30 Pro is only available in Germany, but it will reportedly come to more countries soon. Re-releasing last year’s phone might be a good stopgap measure right now, but the P30 Pro isn’t going to remain competitive long enough to make this a long-term strategy. Huawei shipments have dropped about 35 percent outside China, wrecking the company’s plans to become the largest smartphone OEM. Unless something changes, its fortunes outside China are grim.
Dell built this gaming desktop with an Intel Core i5-9400 and GeForce GTX 1660 Ti graphics processor. Together, this hardware can run games with high settings at 1080p resolution. The system also has a unique front panel that looks cool and edgy, and with promo code 50OFF699 you can get it now marked down from $879.99 to just $719.99 from Dell.
Equipped with a powerful Intel Core i5 processor and 8GB of RAM, this system is well suited to multitasking with everyday applications. It also has a 512GB NVMe SSD, which can load files quickly and gives you plenty of space to store movies and documents. Right now you can get this notebook from Dell marked down from $699.99 to $599.99.
If you need a desktop for work or everyday home use, this system offers solid performance. The Intel Core i5-9400 processor that comes with this system has six CPU cores clocked up to 4.1GHz, which works well for multitasking. This system also has 8GB of DDR4 RAM and a DVD-RW optical drive, which is becoming an increasingly uncommon option on modern PCs. You can get this system marked down from $998.57 to $549.00 from Dell.
Dell New G5 Intel Core i5-9400 6-core Gaming Desktop with GTX 1660Ti for $719.99 at Dell (use code: 50OFF699 – list price $879.99)
Dell Inspiron 14 5000 Intel 10th-Gen Core i5-1035G1 Quad-core 14″ 1080p Laptop with 512GB SSD for $599.99 at Dell (list price $699.99)
Dell Vostro 3000 Intel Core i5-9400 6-core Desktop for $549 at Dell (list price $998.57)
You’ve heard the old saying before. Numbers don’t lie.
Year after year, engineering jobs pay a higher average starting salary than any other U.S. job sector. It’s just a fact of the numbers. According to the U.S. Bureau of Labor Statistics (BLS), engineers earn a median annual wage of $91,010, and the engineering field is projected to add nearly 140,000 new jobs over the next decade.
With millions reevaluating their place in the job market right now, this might be the perfect moment to set your sights on the most lucrative job field out there today. And that includes training in math, the beating heart of essentially any engineering job.
If algebra is about where you started to check out on math in high school, Algebra 2: The Complete Course can get you back on the horse, featuring in-depth looks at geometry, inequalities, functions, graphs, and more. Matrices: Learn the Foundations for Linear Algebra and Complete Linear Algebra for Data Science and Machine Learning take that learning to the next level, showing how those concepts are used in cutting-edge fields like data science and artificial intelligence.
But as in high school, algebra advances into calculus, which leads to the four-part Intuition Matters! Applied Calculus for Engineers training led by noted aerospace and robotics engineer Mark Misin. His approach to teaching focuses on learning what concepts like vectors, linearization, wave representation, and others actually mean, so it becomes actual knowledge and not just rote memorization.
The course package also features close examinations of statistics in the Complete Statistics for Data Science and Business Analytics course, as well as the Complete Electricity for Electronics, Electrical Engineering training, both offering practical examples of how engineering principles can not only expand your skillset but lead to a prosperous new career.
Courses in the package routinely range from $50 to $199 each, but with the current offer, you can get all nine courses now for just $28.99.
Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com.
China celebrated success last week when its prototype crewed spacecraft returned to Earth after several days in orbit. It got there with the aid of a new, more powerful rocket called the Long March 5B. The core stage remained in space until yesterday when it splashed down in the Atlantic Ocean. This was no piddly little chunk of space debris, though. The Chinese rocket stage was the largest uncontrolled reentry in decades.
The Long March 5B is China’s next-generation launch platform with a payload capacity slightly higher than the SpaceX Falcon 9. China hopes to use the new rocket to assemble a modular space station in orbit of Earth, so it weighed down the test capsule with extra fuel to simulate 20-ton station segments. The spacecraft returned to Earth safely last week, but the core stage rocket remained in space until yesterday when it plummeted uncontrolled to Earth.
Most space launches release rocket stages over open ocean or, in the case of SpaceX, land them on a drone ship. Uncontrolled reentry of space debris is not uncommon, but the pieces are rarely as large as the 17.8-ton Long March stage. This was the largest uncontrolled reentry since the 39-ton Soviet Salyut 7 space station crashed to Earth in 1991.
At 11:21 Eastern time the CZ-5B rocket is predicted to pass 170 km directly above Central Park, New York. I've never seen a major reentry pass directly over so many major conurbations!
The odds of an unneeded satellite or rocket segment hitting anything important are small, but space agencies still try to drop them in the ocean via controlled reentry. China did plan for the booster to fall to Earth, but it didn’t know where. Its resting place turned out to be just off the coast of Africa, about 100 miles from Mauritania.
Astronomer Jonathan McDowell from the Harvard-Smithsonian Center for Astrophysics says he’s never seen a major reentry pass over as many populated areas as the Long March 5B did. It even flew just 105 miles (170 kilometers) over New York City during its descent. Because the reentry was uncontrolled, the Chinese government had no say over the course the stage took, but you could argue it was irresponsible to let it reenter the atmosphere without any plan.
China is moving aggressively to send astronauts beyond low-Earth orbit, and the recovery of its experimental capsule from a high orbit is a significant step in accomplishing that goal. We can expect more rocket segments to drop out of the sky as China continues its testing. Hopefully, it will be a bit more careful about where the equipment drops, though.
AMD has announced its new B550 motherboard family as the successor to the B450, with new features like PCIe 4.0 and multi-GPU support, though the latter is of questionable value these days. Still, it’s an update to the B450 that brings a faster interface — as well as a new wrinkle for AMD fans.
When AMD launched Ryzen in April 2017, it declared it would support Socket AM4 at least through 2020. Many fans read this as a promise that AMD would support the same motherboard chipsets for the duration of the AM4 socket. That difference of interpretation has caused some confusion about what kind of support matrix AMD would offer for Ryzen motherboards as the CPU family evolved.
With the launch of the B550, AMD is making a break between current and future CPU support. The following chart explains which motherboards support which CPUs, now and in the future:
This chart indicates that AMD Ryzen 3000 CPUs aren’t supported on X370, but that may reflect the fact that support is on a case-by-case basis for that platform. In any event, what we see here is that there will be no support for future AMD Zen 3 microprocessors on either the 300 or 400 series boards. If you want a platform that’s guaranteed to be upgradeable in the future, you’ll need to move to X570 or B550 to do it.
How much upgradeability will that get you? That’s uncertain. We don’t know when DDR5 will be introduced, and AMD is expected to use AM4 until it is. It seems likely we’ll see at least one more DDR4 cycle after Zen 3, and AMD has promised to continue to keep improvements rolling in, generation-on-generation. So far, the company has done an excellent job delivering on those promises.
Did AMD Deliver on the Spirit of Its Promise?
AMD may not have promised to provide chipset support through 2020, but plenty of people heard the statement that way. So, did the company provide the upgrade path it implied existed? I would argue yes.
In April 2017, a top-end Ryzen system consisted of an X370 motherboard and an eight-core Ryzen 7 1800X CPU. Today, just over three years later, that same motherboard is likely capable of stepping up to a Ryzen 9 3950X. In well-threaded tests, the 3950X can hit over 2x the speed of the 1800X. Even in single-threaded tests, the 3950X is often 1.25x – 1.35x faster than the 1800X.
None of this automatically makes a 3950X a great upgrade for an 1800X owner — if you don’t have any workloads that can scale up to 12-16 cores, you aren’t going to see the same benefit as someone who does.
This isn’t the first time we’ve seen a motherboard deploy with support for, say, dual-core CPUs, only to add support for quad-core or even six-core chips as those solutions became available, so I can’t say that Ryzen 7 delivers a completely unprecedented upgrade. But it’s certainly one of the best overall upgrade values that we’ve historically seen. Realistically, I’d expect an X370 system rebuilt on Ryzen 9 3950X to still be an effective performer in 4-6 years. Desktops don’t age like they used to — I’m typing this on a Core i7-4960X that’s continued to provide perfectly adequate performance for gaming and desktop work. Even if a person swaps out in 2024, that’s a seven-year lifespan for the AMD system.
Granted, it is a little annoying to have to keep track of all the different support diagrams, so make certain you know what you are getting into before you buy. It’s not clear how many more product cycles we’ll see on AM4, but the relatively slow rate at which desktops age makes this much less of an issue than it used to be.
Approximately 7,000 years ago, a group of ancient humans built the first-known seawall in an attempt to keep rising tides from inundating their village. It’s the earliest known structure of its type and it shows how Neolithic communities attempted to adjust to the massive sea-level rise kicked off by the ending of the last Ice Age.
The wall is located at Tel Hreiz, a settlement in what is now Israel that existed some 7000 – 7500 BP (Before Present). A number of Neolithic settlements have been found submerged off the coasts of various countries — the earliest human settlements were often near both oceans and rivers, and older settlements are found farther off-shore than later ones, illustrating how sea-level rise at the end of the last ice age impacted human settlements. When constructed, the now-submerged village sat nearly 10 feet above the water. This settlement would have been created during a time when sea levels were rising rapidly, at a mean rate of 2.6mm per year.
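It’s worth pausing on what 2.6mm per year actually adds up to over the centuries-long occupation the article attributes to the village. A quick calculation, using only the rate given above:

```python
# How much sea-level rise accumulates at the quoted mean rate of
# 2.6mm per year over multi-century spans.
rate_mm_per_year = 2.6
for years in (100, 300, 500):
    rise_m = rate_mm_per_year * years / 1000
    print(f"{years} years -> {rise_m:.2f} m of sea-level rise")
```

Over a 300-500 year span that comes to roughly 0.8-1.3 meters, a substantial fraction of the nearly 10 feet (about 3 meters) of elevation the village started with, which helps explain why its inhabitants would invest so heavily in a wall.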
Tel Hreiz was identified as an archaeological site in the 1960s but has never been systematically excavated. Instead, the area has been surveyed for features when storms and tides exposed previously hidden sections of the village, including the highly unusual seawall, which was exposed by storms and surveyed in both 2012 and 2015. It runs for more than 100 meters, with a dogleg in one section. The northern limit of the wall has been found, but the southern end remains under sand, and the wall’s full extent is unknown.
The image below shows where various artifacts have been found, and their relation to the wall itself.
Various objects located during the investigation and their locations relative to each other.
So, how do we know it was specifically built to hold back the ocean, rather than for some other purpose? There are multiple clues.
First, when the Pottery Neolithic village stood here, the seawall sat between the sea and the village. Only one hearth has been found on the seaward side of the wall, and the land between the wall and the sea, known as a swash zone (the turbulent area where waves wash up on the beach after breaking), wouldn’t be good for grazing, animal husbandry, or freshwater supply. The chance that the structure was intended to be a harbor or breakwater is remote; the earliest-known stone-built harbor dates to 4500 BP, thousands of years after this structure was built. Early harbors were built in areas with natural features like bays, which Tel Hreiz lacked (and still lacks today). Archaeological finds suggest the village persisted for 300-500 years, that it was initially built well away from the water, and that the rapid sea-level rise occurring all over the world as a result of glacier melt inundated many of these communities. Drowned prehistoric villages are fairly common all over the world, but none so old has ever had a feature like this.
It is not clear why the villagers thickened one section of the wall, but it was built with several different architectural styles. The authors note:
Despite these different building styles, the boulder-built feature is a continuous and unified architectural entity which forms a wall. This is evident in the arrangement, nature and size of the stones; aside from the small dogleg, the boulders are aligned in a consistent and uniform direction and make up a relatively straight and continuous line parallel to the coast. They also follow the same bathymetric depth contour; representing the past topographic contour of the prehistoric coastline. Notably, for its entire length, it is free-standing and with the exception of the apparent stone wall fragments associated with the dogleg and the hearth (see below), the wall is not attached to any domestic structure in the village.
The boulders they used didn’t just come from farther up the shore — the nearest source of these stones would have been the riverbed and river mouth of what we now call the Oren and Galim Rivers, 2.28 miles (3.8km) and 0.96 miles (1.6km) away, respectively. Individual stones have likely shifted and some may have washed away in storms, but the wall remains a contiguous and highly visible feature in the landscape — and very clearly artificially constructed.
Even a mile is a fair distance to walk with stones this large. The visible rocks of the wall are roughly 20-39 inches in diameter (50-100cm), stand about 39 inches (100cm) tall, and weigh 200-1000kg each. For a small community in the Pottery Neolithic period, this wall was a huge investment of resources, and it may have been extended one or more times to provide additional protection. There is a second known example of an ancient seawall in the area, though it’s much younger — a boulder-built seawall dating to 3100-3500 BP in Atlit North Bay, some 1.8 miles south of Tel Hreiz.
This image shows the shift in Modern Sea Level (MSL) between today and the PN (Pottery Neolithic) period. PNSL refers to the sea level during the Pottery Neolithic period, approximately 7000 – 7500 BP (Before Present).
In the end, all the pieces of the puzzle point towards the same conclusion. A group of humans founded a village during a time when the town sat high and dry. Decades or centuries later, they realized that the changing climate threatened their way of life and they fought to keep their homes. Every single boulder in that 100+ meter wall represents a 200-1000kg rock moved over distances of at least a mile at a time when every calorie was dearly purchased. They might have had cows. They didn’t have wheels. They did it anyway.
We know that Neolithic people deployed a variety of sophisticated strategies to manage water in various ways (the article goes into more detail on this), but the seawall found here is unique for its age. The world of that era was astonishingly empty by our standards. They could’ve gone somewhere else, right?
Maybe. But they didn’t. There’s something deeply human in that. The meters of sea-level rise they were forced to contend with would challenge the flood-managing capacity of a modern first-world nation, even in a best-case scenario. What did they have? Rocks. Imagine looking at the rising ocean when the first, last, and pretty much only tool you’ve invented for holding it back is sticking two stones next to each other and filling gaps with sand and clay.
“If they’d known what they were up against, they never would have tried,” some might say, ignoring the history of every doomed cause, last stand, and hopeless fight on Earth. Truth is, they might have tried anyway. There’s something deeply human in that, too. It’s what connects the nameless Neolithic residents of what we now call Tel Hreiz with people across the Pacific on low-lying islands, or in the Arctic, where melting permafrost is driving significant land losses and temperatures have warmed faster than anywhere else on Earth. The amount of ice melting off Greenland each year is accelerating and the island now loses 7x more ice per year than it did in the 1990s. Luckily, the rate of sea-level rise happening now is much lower than when the great ice sheets were melting off the planet, but the number of people at risk today is larger than the total number of humans alive on planet Earth in the Neolithic era.
We, of course, have far more than rocks at our disposal. But we also have far more to lose and fewer places to go.
Updated (5/12/2020): This is one of my favorite stories that I’ve ever written, though I’m not just resurfacing it for that reason. A number of readers raised additional questions of interpretation in the comments — namely, why do we think this is a wall built against sea-level rise, rather than for other purposes? There are answers to these questions, and I’d like to address them.
Humans build walls for specific purposes. To understand what those purposes are, archaeologists examine a number of site characteristics, including the layout and structure of the wall, its location relative to the town or village it guards, and what other people in the area were building at the same time. This wall would not have been built for agriculture — it’s far too close to the historical shoreline, and the area was periodically inundated even before the sea began to rise. The ground would not have been good for grazing or beast-penning for the same reasons (we don’t actually know if these people had domesticated cattle, so it’s not even clear they’d have been building these kinds of pens in the first place).
The wall wasn’t a defensive structure (it would have curled around the buildings to defend from land as well if it had been). It wasn’t an agricultural terracing system — there’s no evidence of other terraces, and you wouldn’t build a large, complicated terrace directly over the rapidly-rising ocean.
Could they have been fish traps? This was an interesting question, but the evidence suggests no. The Stilbaai Tidal Fish Traps, found on the coast of South Africa, are well-known examples of ancient fish traps. One major characteristic of these structures is that they are enclosed, made of tightly packed stones, and do not resemble the overall structure above. If the thickest part of the wall above had been hollow in the center (and larger), it might have functioned in this manner, but neither is the case.
It may help to know that we have found a great many drowned littoral villages of this sort all over the world. We have known for centuries that rivers and oceans helped nourish (literally and figuratively) early human civilizations. Drowned villages in areas corresponding to lower sea levels during the last ice age are common. There are signs in other villages nearby of various other measures humans took to combat climate change, including digging fresh wells and attempting to raise the water table to avoid brackish contamination from rising water.
We know, factually, that humans once dwelled in areas we are now cut off from by sea-level rise. We know that humans generally fight to remain in their homes, then and now. The discovery that humans built a seawall to fend off climate change is surprising for how early they built it, not for the fact that they tried.
The Trump Administration is reportedly in talks with both Intel and TSMC to open new foundries in the United States. The discussions are part of the Trump Administration’s efforts to reduce the US semiconductor industry’s overall reliance on China as a source of manufactured goods.
Much of Intel’s manufacturing capability is already in the United States, but a new TSMC foundry would be that company’s first US installation. TSMC’s interest in building a US plant is supposedly related to its relationship with Apple. TSMC has confirmed that it is interested in building an overseas plant but has said nothing about the location. Intel, in contrast, has been more forthcoming.
“We’re very serious about this,” Greg Slater, Intel’s vice president of policy and technical affairs, told the Wall Street Journal. “We think it’s a good opportunity. The timing is better and the demand for this is greater than it has been in the past, even from the commercial side.”
That’s not a surprising attitude for Intel to take. The company has been capacity-crunched for the last few years. Back in 2014, Intel made the decision to delay finishing Fab 42 in Chandler, Arizona. While this didn’t initially cause any problems, it later compounded the company’s manufacturing shortage from 2019 – 2020. Intel was unable to manufacture enough CPUs to meet 100 percent of market demand because its 10nm transition stalled out while demand for higher core-count 14nm CPUs jumped dramatically. Intel has restarted work on Fab 42, but demand for the company’s highest core-count CPUs has been rising steadily.
The one thing that’s unclear from the WSJ article is whether this facility would be a foundry, specifically, or if it would also handle other parts of CPU manufacturing. There’s a great deal of talk about building US factories due to the fragility of Asian supply lines, but CPUs today are still often shipped to test and assembly facilities outside the United States. Bringing all of that work home requires more than just a foundry.
Inside a TSMC foundry facility.
A TSMC factory in the United States would arguably be the larger win for supply chain stability. Intel already has multiple factories in the US, and Samsung has a single facility in Austin. A major TSMC plant would give all three top-end foundry manufacturers a presence in the US. GlobalFoundries, while no longer a leading-edge provider, is already based in Malta, NY. TSMC is currently in the overall lead in terms of node progression, with Intel hoping to retake the lead at 5nm. Of course, having foreign companies (from the US’s perspective) build hardware on American soil would also give the US more leverage over a company like TSMC in the event of a trade dispute.
Even if Intel or TSMC reach an agreement with the US government to build a new foundry, it’ll be a few years before the facility is up and running. Foundries typically require 3-5 years to build, at $10B-$20B per factory.
Lenovo’s versatile Yoga C740 notebook features a 1080p touchscreen display that can be rotated around to put the system into tablet mode. The Yoga C740 also features a sleek aluminum body and a fast Intel Core i7 processor. Right now you can get it from Lenovo marked down from $929.99 to just $779.99. Just use promo code SNEAKPEEKMD7 at checkout.
If you’re telecommuting to work lately, you probably want to invest in a quality webcam for your virtual office meetings. Lenovo’s 500 webcam fills this role quite well with a 1080p image sensor. It can also lock and unlock your computer using Windows Hello facial recognition. Right now it’s on sale at Lenovo, marked down from $69.99 to $45.99 with promo code EXTRA8ACC.
This high-end display features a 4K display panel with high color accuracy, including support for 99.5 percent of the AdobeRGB color gamut. This makes it well suited for professional-grade image editing. Currently, you can get it from Lenovo with promo code MONITOREXTRA5, which drops the price from $619.00 to $440.81.
Lenovo Yoga C740 Intel Core i7-10510U Quad-core 14″ 1080p IPS Touchscreen Laptop with 512GB SSD for $779.99 at Lenovo (use code: SNEAKPEEKMD7 – list price $929.99)
Lenovo 500 1920×1080 FHD Webcam for $45.99 (Ships in more than 5 weeks) at Lenovo (use code: EXTRA8ACC – list price $69.99)
Google’s annual I/O conference was supposed to happen this week. Alas, like many other gatherings, it’s been canceled due to the ongoing coronavirus pandemic. We expected all the I/O announcements to come by Friday, but it’s looking like Google may postpone some of its most important ones. Documents from Vodafone indicate the hotly anticipated Pixel 4a launch has been pushed back from May to June.
Last year, the launch of the Pixel 3a coincided with Google I/O, and leaks this year made it look like the 4a would be the same. However, Google canceled not only the physical event but also the online I/O conference it initially promised. The next thing on the agenda is Google’s Android 11 beta launch, which will take the form of an online keynote on June 3rd. We haven’t seen any evidence of a deluge of impending Google announcements, so what of the Pixel 4a?
According to a leaked Vodafone document, the carrier expects to begin selling the Pixel 4a on June 5th. That’s much later than the May 11th date the carrier had listed previously. Of course, we can only speculate about the reason for the delay. Google’s teams are still adjusting to remote work, which is one of the reasons cited for the lack of an online I/O replacement. It could be that the final phases of development and testing for the 4a have just taken longer than expected.
The Pixel 4 hasn’t been selling well, and Google has been discounting it aggressively as a result.
However, there’s also an iPhone-shaped elephant in the room. Apple launched the 2020 iPhone SE recently, and it starts at $400 just like the Pixel 4a (probably) will. Apple’s new phone sports much of the same hardware from the more expensive iPhone 11 family, thanks to its carefully controlled supply chain. Google’s mid-range Pixel will have predictably mid-range components, and that could mean it won’t compare very favorably to the iPhone SE.
The delay might help Google make some last-minute adjustments to the marketing or pricing, but it will, at the very least, give Google some distance from the iPhone announcement. The Pixel 4a is an important phone for Google’s hardware division. The company’s flagship phones haven’t sold very well lately — we know the Pixel 3a gave the company a much-needed boost last year after disappointing Pixel 3 sales. The Pixel 4a could do the same in 2020.
Experience is the greatest possible teacher. So when you cross paths with someone who’s been through the ups and downs of a particular road, understanding that journey can help make their experience your own.
“Nothing ever becomes real ’til it is experienced.” ― John Keats
Experts are those who have experienced and triumphed — and in areas like career success, self-motivation, creativity, and more, those experts can hold the key to your own mastery of those vital disciplines.
That’s the philosophy behind Big Think Edge, a video series that asks some of the greatest contemporary thinkers and doers to detail their personal path, offering you insight into following the path to your own success. Right now, a lifetime of access to Big Think Edge’s expanding collection is just $159.99, more than a third off its regular price.
With over 1,000 video lectures already in their growing archives, the Big Think Edge library is chock full of mentors and approaches to personal greatness in dozens of important areas that can benefit your day-to-day personal and professional journey.
From Elon Musk to David Stern, from Richard Branson to Neil deGrasse Tyson to former President Jimmy Carter, Big Think Edge’s roster of expert presenters is a veritable who’s who of game-changers in the fields of politics, business, science, self-help, creative arts and more.
If you want to learn creativity, who better to explain it than a comedy legend like John Cleese? If you want to know what it takes to be the best in your pursuits, who better to ask than a man who wrote his own book on personal greatness, best-selling author Malcolm Gladwell? And if you want to steel your mind and overcome your own limitations, who better to help you past mental roadblocks than a former Navy SEAL like David Goggins?
Each week, Big Think Edge unveils three exclusive new life and career lessons as recounted by acclaimed academics, successful business leaders, and learned Nobel Prize winners. Up and down their roster of big names, you’ll undoubtedly find a visionary voice ready to impart their often hard-earned wisdom to you.
There are persistent rumors that both a Diablo II remaster and Diablo IV could drop before the end of the year, alongside the upcoming World of Warcraft expansion, Shadowlands. Blizzard has reportedly come under pressure to show more return on investment on an ongoing basis over the past few years, and this push toward multiple simultaneous launches in the back half of the year could be the company’s way of moving in that direction.
The rumor comes from French site ActuGaming, which has previously broken accurate rumors about upcoming Blizzard projects. Supposedly Vicarious Visions is supporting Blizzard on the remakes, which would make some sense; that company has been involved in a number of remastering efforts over the past few years. A launch date before the end of the year would put Diablo II: Resurrected in danger of colliding with Diablo IV, which might not be something Blizzard wants to tee up.
Will This Be Starcraft: Remastered or Warcraft: Reforged?
The problem with hearing that Blizzard is returning to Diablo II is that, as great a game as Diablo II was — and I loved it enough to launch my own modding efforts around it 20 years ago — it, like Warcraft III, could use more than just a coat of paint. There were significant design limitations in the original Diablo II that limited the ability of early-game skills to scale, thereby locking endgame play into a smaller set of talents and capabilities than it initially appeared would be the case.
ActuGaming points out that Blizzard is aware of the communications disaster around WC3 Reforged and considers it to have been a failure of communication. The problem with WC3 Reforged, in this telling, was that Blizzard implied the game would be a remake when it was actually just a remaster.
That’s true… to a point. But only to a point. What people hated about WC3 Reforged was partly the fact that the new game forced all previous owners into a new front-end, stripped out previous support for game modes, and generally harmed the experience of people who had deliberately chosen not to buy Warcraft III: Reforged. It’s true that Blizzard set expectations higher for a better version of the game, but there was nothing wrong in doing so. Both Warcraft III and Diablo II could use more than just a fresh coat of graphical paint.
I’m not suggesting that these titles should be fundamentally overhauled, by any means, but Diablo II would scarcely suffer from a slightly more fleshed-out plot, a bit of new lore, or some expanded side quests. Partly that’s because Diablo II has always felt thin compared with other Blizzard worlds as far as on-the-ground interactions with the major players go. Deckard Cain and Tyrael were really the only fleshed-out NPCs in the entire game — everyone else was a character you exchanged a few optional bits of dialog with in between missions. Starcraft, which dates to approximately the same era, put a much larger emphasis on NPC and plot development. And while Diablo II had a multiplayer component and was heavily played online, it never evolved into the competitive esport that Starcraft did, so it need not be locked to a slavish interpretation of every single original rule with no room for experimentation.
I agree with Blizzard that it miscommunicated badly around Warcraft III: Reforged, but I don’t think the problem was that the company mis-set expectations. I think the problem is that the company committed to one vision of the product and delivered a vastly inferior one, which came with penalties that hit people who weren’t even interested in the game while offering no one what had actually been promised. If Blizzard had delivered the game it had promised, rather than the half-baked version it launched, some people might have still been unhappy with the changes, but the majority would have recognized the strength and improvements to the product.
Warcraft III: Reforged feels like it was built by people who were slavishly devoted to duplicating the wrong aspects of the game, which is how we got lavish recreations of badly built cinematics that should have either been left in their original forms or redesigned as fundamentally different encounters. It’s bad enough that I’d argue the fan-made remaster of the original Arthas v. Illidan fight is better than the one we actually got from Blizzard.
If Blizzard wants Diablo II to be well-received, it needs to demonstrate it understands what people liked about the series and want to return to in the first place. After Reforged, that’s not as certain as it used to be.
We’re at the dawn of a new game console generation, and both Sony and Microsoft are hoping to come out of the gate with strong sales. However, numerous reports claim that Sony is having trouble keeping the PlayStation 5’s price down, and now some industry veterans think they’ve figured out Microsoft’s strategy. The company may just be waiting for Sony to announce a price so it can undercut the PS5 by a noteworthy amount.
Wedbush Securities analyst Michael Pachter and former EA and Microsoft executive Peter Moore appeared on a podcast recently to talk about the game industry. The pair talked about console launch strategy with particular emphasis on how much the hardware will cost. Neither Microsoft nor Sony have talked about pricing, but numerous reports claim that Sony’s cost to manufacture the PS5 is much higher than previous consoles.
Both the Xbox Series X and PlayStation 5 will rely on an eight-core, 16-thread AMD processor and an RDNA 2 GPU. Sony’s implementation is a bit different, with support for flexible clock speeds and Adaptive Voltage and Frequency Scaling that shifts unused power from the CPU to the GPU. It also has a faster custom storage platform. Sony is reportedly looking at around $470 to manufacture each console — a cost high enough that it has allegedly opted to scale back the number of units it plans to build for launch.
Sony PS5 DualSense. The console it goes with is still a mystery.
It’s not unusual for companies to make little to no money on game console hardware when new generations launch — it’s all about getting people locked into a platform for the next five years so they’ll buy games, controllers, and online services. Pachter noted that Sony looks to be targeting $500 for a launch price, but Microsoft has more cash on hand than Sony. It could afford to lose money on the first 10 million units.
So, Microsoft might be ready with a price, but it’s waiting on Sony to announce that $500 price tag. Then, Microsoft will very publicly undercut Sony by as much as $100. A $400 price tag would make the Xbox Series X a much more attractive purchase this holiday season, and that could give Microsoft a big advantage going into this new console era. Microsoft will also be able to lean heavily on its growing Game Pass subscription service and the xCloud game streaming platform.
On Friday, The New Republic published an article by Christopher Ketcham, under the thoughtful and modest title, “Is 5G Going to Kill Us All?”
Preserved here, just in case they have an outbreak of sanity and decide to change the title.
It’s astonishing to see an article like this run in a publication of The New Republic’s history and caliber, particularly at a time when conspiracy theorists, spurred by baseless claims linking 5G to coronavirus, are actively destroying cell phone towers and wrecking installations. There have been 77 arson attacks since March 30, with staff reporting 180 incidents of abuse. Articles like Ketcham’s only fan the flames.
Let’s Talk About the Author
I can’t speak to any of Christopher Ketcham’s writing on any other topic, but when it comes to wireless technology, he’s been banging the same drum for a decade — and using exactly the same rhetorical techniques to do it.
In a story written in 2010, Ketcham begins by telling the story of Allison Rall, a young mom with three children whose cattle sickened and whose children fell ill after a cellular tower was installed nearby in 1990. He immediately ties her case to a statement by an EPA scientist named Carl Blackman. “With my government cap on, I’m supposed to tell you you’re perfectly safe,” Blackman tells her. “With my civilian cap on, I have to tell you to consider leaving.”
In the most recent story, we are introduced to Debbie Persampire, a woman “who believes cell phones are poisoning her children.” Ketcham presents this statement uncritically, even as he describes how the woman covers the rooms of her house in an EMF-reducing paint that sells for ~$66 per liter. Her family, we are told, “trusts her.” Whether her doctor trusts her is not discussed.
From that point, Ketcham pivots. Now, we’re told that a 2018 study by the National Toxicology Program discovered evidence that exposing rats to cell phone radiation can cause various forms of cancer. Again, it’s the exact same story structure — a sympathetic emotional hook, a mother in desperate straits, and finally, a government figure or body with critical information showing a major problem that somehow, somehow, has been swept under the rug.
The only problem is, it’s claptrap from start to finish.
Let’s talk about why.
As Ars Technica has detailed in multiple stories, the NTP report Ketcham uncritically quotes is riddled with methodological flaws to the point of uselessness. For starters, the control rats — the rats not being exposed to any radiation — died at nearly twice the rate of the exposed rats. Right off the bat, that’s a massive problem — the control rats died so quickly that they don’t represent a control group at all. Furthermore, the result makes no sense on its face: there is no known biological reason why rats exposed to cell phone radiation would live longer. Clearly something else was affecting the male rat population.
Furthermore, the higher incidence of cancer that Ketcham refers to was only found in the male rats, where 48 percent of the control group died early. In female rats, where this did not occur, incidents of cancer between the two groups were identical. The control and exposed groups of mice, tested under the same protocols as the rats, saw no change in cancer rates.
Ketcham does not address these points. Instead, he pivots to a 2011 report by the International Agency for Research on Cancer, finding that cell phone radiation is a “possible human carcinogen.” This is true. But he completely neglects to report any of the context of that finding.
The WHO classifies cell phone radiation as a Category 2B risk, meaning “This category is used for agents for which there is limited evidence of carcinogenicity in humans and less than sufficient evidence of carcinogenicity in experimental animals.” For comparison — because context is important — processed meats, including bacon, hot dogs, and sausage, are classified as Group 1: “Carcinogenic to humans.” Red meat like beef, pork, and lamb is Group 2A: “Probably carcinogenic to humans.”
In other words, if you think it’s justified to get upset over the Group 2B classification on your Wi-Fi but aren’t worried about the bacon-wrapped steak you just ate for lunch, the WHO believes your priorities are vastly out of whack.
Ketcham loves to draw frightening associations in his texts. Readers, for example, are told that what little we know about 5G spectrum usage comes from military applications, which “gives some observers pause.” After all, the government has a weapon called the Active Denial System, which uses millimeter waves to make your skin burn painfully. The fact that the ADS is designed to hit targets with a 100kW output beam is conveniently ignored.
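A quick sketch of the power scales involved shows why the comparison is misleading. The 100kW figure is the article's; the 0.2W handset figure is my assumption (roughly the 23dBm LTE uplink maximum), not a number from the article:

```python
# Power-scale comparison. The 100kW Active Denial System beam output is
# the article's figure; the 0.2W handset transmit power is an assumed
# typical LTE uplink maximum (~23dBm), not a figure from the article.
ADS_OUTPUT_W = 100_000   # Active Denial System beam output
PHONE_TX_W = 0.2         # assumed typical handset transmit power

ratio = ADS_OUTPUT_W / PHONE_TX_W
print(f"ADS output is roughly {ratio:,.0f}x a handset's transmit power")
```

Half a million times the output of the phone in your pocket. The fact that both use millimeter-wave frequencies tells you nothing about handset safety, any more than a campfire tells you about a candle.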
That looks EXACTLY like my cell phone. Especially the giant pain-firing radar dish on the top.
Pro Tip: Do not stand in front of anything that outputs 100kW of energy. No matter what it does, you will not like it.
Near the end of the article, Ketcham again grounds his critique of 5G in the poorly regarded, highly erroneous (as in, shot full of errors) Ramazzini study, meticulously deconstructed here by Dr. John Timmer of Ars Technica. Again, none of these errors are mentioned in Ketcham’s piece, which instead paints the picture of an FCC overrun by industry hacks and individuals less interested in truth than in a rush to judgment to placate the industry.
This is not a piece of journalism. It’s a piece of propaganda written by an author who knows exactly how to create a solid-seeming article, to feed a line of argument he’s been making for a decade using the same rhetorical techniques and half-disclosed facts. The New Republic is in desperate need of a science editor.
5G is a lousy technology. Qualcomm, Verizon, AT&T, and the other companies that deploy it have been more than willing to misrepresent various aspects of the service. The chances that anyone anywhere will benefit from 5G deployments right now are minimal.
But the reason 5G antennas are sprouting up by the hundreds isn’t that corporations want to saturate us in dangerous EMF. It’s that 5G signals are so short-range and weak that it takes hundreds of antennas to get any signal anywhere. The very facts that make 5G a laughable source of bodily harm are the ones Ketcham leans on to paint it as an ominous threat.
5G does not cause cancer. LTE does not cause cancer. 3G does not cause cancer. 2G did not cause cancer. Your home microwave doesn’t cause cancer, either. They don’t cause coronavirus. Electrosmog does not exist. Wearing tinfoil around your head may treat your mental condition via the placebo effect, but it isn’t going to do anything else. Repeated tests of volunteers who claim to be sensitive to EM fields have demonstrated these individuals cannot tell when an EM field is active in a room.
By providing a platform to Ketcham, The New Republic has made itself a mouthpiece for a small handful of individuals who have maintained that wireless technology represents a massive threat to human life, even as the studies that they claim support their arguments collapse under the weight of methodological errors. Ketcham ignores the tremendous flaws in his own arguments. Don’t be fooled.
With a new console generation on the horizon, Microsoft, Sony, and gamers are all collectively bracing for the impact on next-generation console storage capacities. While it’s true that game sizes have grown for decades, what we’re seeing for the first time with this upcoming generation is that hard drive / SSD sizes really aren’t increasing all that much to keep pace. The Xbox Series X will offer a 1TB SSD, which is the same capacity as the Xbox One X (though vastly faster) and only 2x larger than the Xbox One in its launch configuration back in 2013.
The problem is, of course, game sizes haven’t merely doubled since then. We’ve now got titles like Call of Duty: Modern Warfare at an eye-popping 185GB. Remember back when Titanfall, at 50GB, was raising eyebrows and breaking records? We’ve exceeded that by more than 3x in the years since.
According to Microsoft, it has a compression technology, BCPack, intended for this generation that should substantially improve the situation. First, the company has built hardware-level decompression directly into the console, cutting the CPU cost of handling the workload at top speed from roughly three cores to nothing. A dedicated controller now handles this task.
That’s beneficial for performance, but it doesn’t do much to help your hard drive’s aching bits. According to reports online, however, Microsoft’s new texture-packing methods have hit unparalleled heights of compression, reducing size by as much as 50 percent compared with current methods. Sony’s Kraken, in contrast, will supposedly improve texture compression by about 30 percent.
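As a back-of-the-envelope illustration of what those ratios could mean for install sizes, here's a short sketch. The 50 and 30 percent reduction figures are the reported ones above; the assumption that 70 percent of a big game's install is texture data is mine, for illustration only:

```python
# Illustrative only: how much a 185GB install might shrink if textures
# make up the bulk of the data. The 50% (BCPack) and 30% (Kraken)
# figures are the reported ones; the 70% texture share is an assumption.
INSTALL_GB = 185
TEXTURE_SHARE = 0.70  # assumed fraction of the install that is texture data

def shrunk_size(install_gb, texture_share, texture_reduction):
    """Return install size after shrinking only the texture portion."""
    textures = install_gb * texture_share
    other = install_gb - textures
    return other + textures * (1 - texture_reduction)

for name, reduction in (("BCPack (~50%)", 0.50), ("Kraken (~30%)", 0.30)):
    print(f"{name}: {shrunk_size(INSTALL_GB, TEXTURE_SHARE, reduction):.1f} GB")
```

Under those assumptions, a 185GB install drops to roughly 120GB with a 50 percent texture cut versus roughly 146GB with a 30 percent cut — a meaningful difference on a 1TB drive.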
This is potentially huge. Most of the data held in VRAM or transferred across the PCIe bus is fundamentally texture data. With a single uncompressed 4K texture now as large as 8MB, most of what gets stored on an HDD for a game is texture data. Improving compression algorithms and implementing hardware-based decompression is how Microsoft is hoping to keep costs down without giving up on next-gen fidelity.
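Texture footprints depend heavily on resolution and pixel format, which is why per-texture figures vary so much. A quick sketch of the arithmetic; the bytes-per-texel rates below are the standard figures for uncompressed RGBA8 and the common BC1/BC7 GPU block-compressed formats:

```python
# Storage footprint of a single square texture at various pixel formats.
def texture_bytes(side, bytes_per_texel):
    """Bytes needed for a side x side texture at the given rate."""
    return int(side * side * bytes_per_texel)

SIDE = 4096  # a "4K" (4096x4096) texture

formats = {
    "RGBA8 (uncompressed)": 4.0,  # 4 bytes per texel
    "BC7 block-compressed": 1.0,  # 1 byte per texel
    "BC1 block-compressed": 0.5,  # half a byte per texel
}

for name, bytes_per_texel in formats.items():
    mib = texture_bytes(SIDE, bytes_per_texel) / (1024 * 1024)
    print(f"{name}: {mib:.0f} MiB")
```

Even at the cheapest common GPU format, a single 4K texture runs to several megabytes, so shaving 50 percent off texture data on disk compounds quickly across the thousands of textures in a modern game.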
At the same time, though, we’ve seen the impact of changes like this before, as when Microsoft introduced a 30 percent compression ratio improvement during the lifetime of the Xbox 360.
There’s a really great presentation over at VentureBeat on the cost of making games and how it’s changed over the past few decades. While it dates to 2017, I only recently discovered it, and there’s some useful information I haven’t seen before, measuring concepts like the cost-per-byte to develop a AAA game. One of the more interesting findings of the report is that bytes don’t increase in a stepwise fashion with each console generation.
This is a log scale, so each marker on the y-axis is 10x the size of the previous one. Game sizes have grown at a remarkably steady rate. An interesting point the author, Raph Koster, makes is that the cost per byte has plateaued in recent years (this was 2017; I couldn’t find an updated slide):
Net change in cost per byte hasn’t really come down since 2005, which is one profound reason why games are so much more expensive now than then. We create more bytes, and we create better bytes, but we aren’t really building cheaper bytes.

While the discussion of how games have become more expensive over time may seem to have nothing to do with the question of total storage capacity on the Xbox Series X, both questions relate to how much storage the system needs in the first place, which impacts its overall price. The reason we’ve seen Microsoft sinking so much effort into optimizing every aspect of its content delivery pipeline, I suspect, is partly to avoid the pain of offering less storage capacity relative to game size than we’ve seen at launch in previous consoles.
Both the Xbox Series X and PS5 support add-on drives to increase their base capacities, which is good — I suspect both will need it.
China has been working to advance its crewed spaceflight plans with endeavors like the Tiangong space stations. An important aspect of China’s plan is its new and unnamed spacecraft, which just had a successful test. The vessel launched on the country’s new heavy-lift rocket, orbited the Earth, and landed safely in a Chinese desert.
The spacecraft, designed by the China Aerospace Science and Technology Corp, looks like a scaled-up version of the country’s current capsule and bears more than a passing resemblance to the SpaceX Dragon 2. However, China’s new crew capsule doesn’t have the fancy propulsive landing capabilities of the SpaceX design. NASA currently requires SpaceX to land with parachutes in the ocean, but the company hopes to use the SuperDraco engines for landing in the future.
After reaching orbit, the spacecraft spent the next several days raising its orbit with seven engine burns to reach a maximum altitude of about 4,970 miles (8,000 kilometers). The unnamed prototype uses a trio of parachutes to slow its descent — the smaller Shenzhou capsule currently in service has just one. The new version also has airbags that deploy to further soften the landing, which is another notable upgrade over the previous design. The vessel also carried 10 payloads for science and technology verification.
The Long March 5 rocket.
While the uncrewed prototype successfully demonstrated important new technologies, an even larger module is the eventual goal. The full next-generation spacecraft will support up to six crew members on long-term space missions. A slightly smaller craft similar to the prototype is planned for low-Earth orbit missions.
The primary purpose of this launch was to test China’s new Long March 5B rocket, which is almost as powerful as the United Launch Alliance Delta IV booster. This should give China enough lift capacity to send its new spacecraft to distant destinations like the moon, and China also wants to use the Long March 5B to assemble a new modular space station in the next few years. The prototype module, loaded with extra propellant, served as an analog for the anticipated 20-ton station modules.
Meanwhile, the US government is deciding when to end funding for the International Space Station, which has been the primary orbital research facility for the US and its partners for more than 20 years. There are preliminary plans to build the new Gateway station in lunar orbit, but China may be close behind.