Microsoft, Sony, and other companies still use illegal warranty-void-if-removed stickers

One of the ways manufacturers coerce users not to modify or even open hardware they’ve purchased is through warranty-void-if-removed stickers. These stickers are common on electronics equipment — Microsoft uses them on the Xbox One, Sony has them on the PS4, and you’ve probably owned a phone that had at least one somewhere.

These stickers are almost certainly illegal, as Motherboard points out in relation to the new Xbox One S. The problem is that the stickers run afoul of the FTC’s rules against tying warranty coverage to the use of specific parts or repair services. The same issue is probably why Apple agreed to change its practices regarding iPhones, after devices that had been repaired by third-party shops suddenly failed when upgraded to Apple’s latest operating system.

“The stickers could be deceptive by implying consumers can’t use parts the warrantor doesn’t pre-approve, which violates the anti-tying provisions of MMWA,” FTC spokesperson Frank Dorman told Vice.

This practice isn’t remotely unique to Microsoft. The PS4 does the same thing.  Image by iFixit

Companies don’t like to talk about these policies, most likely because they don’t want to admit they’ve been doing something illegal for decades. Laws like the 1975 Magnuson–Moss Warranty Act were passed to prevent companies from tying customers to expensive repair contracts, or from requiring customers to use only approved hardware installed by “authorized” resellers. The classic example is cars: it’s illegal for a manufacturer to try to force you to install only its own parts.

There are, of course, limits to these laws. If you destroy your transmission or engine while servicing them, the manufacturer is under no obligation to repair the vehicle. What manufacturers aren’t allowed to do is refuse to honor a warranty on your engine just because you installed a different set of speakers or an aftermarket radio. The obligation is on the manufacturer to demonstrate that your third-party repairs or modifications caused the failure, not the other way around.

Modern electronics are tightly integrated, but the concept is the same. Microsoft isn’t allowed to prevent you from opening your own hardware, and neither is any other manufacturer. So why do they?

The answer is simple: Because they know you won’t do anything about it. It’s a nifty example of how companies get away with doing illegal things — the cost of taking them to court and forcing them to comply with the law is higher than the value of the product. A car is expensive enough to repair that companies can’t get away with telling you to pony up thousands of dollars for their own parts and repair shops. On the other hand, a smartphone can cost $500 to $700, but that doesn’t begin to cover the cost of a lawyer to litigate the issue, and Apple, Microsoft, and other companies know it.

In Microsoft’s case, its warranty states that it ceases to apply if the Xbox One is “opened, modified, or tampered with.” That’s flatly illegal. But until someone brings a case against the company and litigates it, electronics companies will continue to put these stickers on their products, and consumers will continue to believe the manufacturers are legally allowed to do so.

The situation is also playing out in new ways thanks to the advent of DRM. Tractor manufacturer John Deere and the Library of Congress have both resisted any attempt to require manufacturers to share data on firmware or other DRM’d blocks of information, because it could conceivably allow for piracy or alter the function of the vehicle. John Deere has gone so far as to claim that by purchasing a tractor, farmers gain “an implied license for the life of the vehicle to operate the vehicle.” It’s the concept of software licensing, except applied to hardware, and the fact that it’s illegal doesn’t seem to concern anyone much.

Source: ExtremeTech

New climate modeling shows Venus may have once been habitable

It’s hard to imagine a less-hospitable location in the solar system than the surface of Venus. Humans can’t survive without spacesuits and life support systems anywhere besides Earth, but Mars, the Moon, and Europa present challenges we could probably meet with current technology. Venus’ surface pressure is roughly 92 times our own — step outside the comfort of a hypothetical spacecraft, and you’d be crushed like the organic equivalent of a beer can. The question of how Venus, the planet most like Earth in size, gravity, and composition, ended up a toxic hellstew of sulfur dioxide with a runaway greenhouse effect has fascinated scientists for decades. Now, new research suggests that Venus might have been the first habitable place in our solar system — and it might have remained so for billions of years.

Our current models suggest that Venus and Earth formed from similar materials, which would strongly imply that the planet initially had substantial water reserves. The scientists in this report used computer modeling to simulate how Venus might have evolved if it began as an Earth-like planet with shallow oceans and an Earth-like atmosphere. Keep in mind that “Earth-like” refers to the conditions of the ancient Earth, not the markedly different ones we find ourselves inhabiting today.

The researchers found that Venus’ simulated rotation speed had a profound impact on how the climate of paleo-Venus evolved over time. Currently, Venus spins extremely slowly, with a year that’s actually significantly shorter than its day. When the climate models kept this slow spin, the temperatures on ancient Venus remained within habitable ranges for a substantial amount of time — up to 2 billion years.

Speed up the rotation, however, and the situation goes south in a hurry. If the Venusian day is “just” 16x longer than our own, surface temperatures skyrocket. One of the noteworthy characteristics of Venus is that its high-altitude wind speeds dwarf anything on Earth, with winds moving up to 60x faster than the planet rotates. In hypothetical early Venus, with a slow rotation speed, the climate model predicts significant layers of cloud cover that would’ve shielded the young planet from the increased level of solar radiation it received relative to Earth. Speed up Venus’ rotation, and the weather patterns that dominate its atmospheric behavior change. As a result, surface temperatures rise markedly.

While our ability to estimate ancient Venusian climate is limited by our understanding of the planet and its evolution, results like this are interesting when considered through the lens of a Fermi Paradox solution we discussed earlier this year. One argument for why we’ve found no evidence of other life to date is that while the conditions for life to arise may be initially abundant, only a handful of planets manage to sustain life long enough for that life to begin reshaping its own biosphere on a global level. On Earth, events like the Great Oxygenation Event reshaped our entire atmosphere and, by extension, our entire biome. On Venus or Mars, even if life initially arose, it was unable to overcome other forces that were heating the planet and driving a runaway greenhouse effect (Venus), or cooling it, leading to the evaporation and sublimation of available water (Mars). Venus lacks plate tectonics but has been extensively reshaped by volcanism; these eruptions are thought to be partially responsible for the current climate and toxic hellstew of an atmosphere.

Humans will likely never live on the surface of Venus; the environment is hilariously noxious to our own existence. The challenges Venusian terraformers would face make Mars look like a walk in the park, though there have actually been some interesting proposals to create floating colonies in the upper layers of the Venusian atmosphere. Still, understanding how Venus’ atmosphere and characteristics evolved over time could help us focus our efforts to find stars with planets within their own habitable zones.

Source: ExtremeTech

Apple’s stagnant product lines mostly reflect the state of the computer industry

Most of Apple’s product lines are severely overdue for a refresh. Apart from the recently refreshed MacBook, many of the company’s Mac products are well over a year old. The Mac Mini is nearly two, the high-end workstation Mac Pro is almost three, and the sole remaining non-Retina MacBook Pro is now more than four years old.

Writing for The Verge, Sam Byford recently argued that “Apple should stop selling four year-old computers.” He’s not wrong to note the 2012-era MacBook Pro is pretty long in the tooth with its 4GB of RAM and Ivy Bridge-based processor, or that Apple has neglected specific products, like the Mac Mini. I thought about writing a similar article last month, but with a specific focus on the Mac Pro. After digging around in Intel’s Ark (a tool that lets you compare the specifics of various Intel processors), I realized that while there are exceptions, Apple’s relatively lax refresh cycle is mostly driven by the low rate of improvements in PC hardware these days. Apple is just more honest about it.

Meet the new CPU, same as the old CPU

Setting aside the non-Retina MacBook Pro from 2012, most of Apple’s laptops are running on Broadwell or Skylake (the 2016 MacBook). There’s a single SKU left over from Haswell at the $1,999 price point, but the laptop lineup is pretty new.

Byford is right when he calls out the nearly three-year gap between the Mac Pro’s debut and the present day, but he doesn’t specifically discuss just how little this has meant to the machine’s top end performance. The Mac Pro ships in two configurations we’ll address: An Ivy Bridge Xeon quad-core at 3.7GHz with 10MB L3 (E5-1620 v2) or a 2.7GHz 12-core IVB-EP CPU with 30MB L3 (E5-2697 v2). Let’s compare those with their counterparts today.

[Chart: Ivy Bridge Xeons compared with their modern Broadwell counterparts]

In the chart above, we’ve arranged each IVB chip on the left, followed by its modern counterpart. Intel doesn’t have a 12-core Broadwell chip in the 135W TDP bracket, only in 105W and 160W flavors. There are higher-core count systems with lower clock speeds, but our hypothetical test-case is a user who wants both high clocks and high core counts.

The first thing to note is how little Intel’s chip lineup has actually changed in the past three years. The modern E5-2687 v4 has a slightly higher base clock speed but a significantly higher TDP. Top frequency is identical between the two. Broadwell offers essentially no clock speed improvements over IVB-E at the quad-core level — in fact, the IVB-EP actually clocked higher than its counterpart. True, architectural improvements will compensate for some of this, but not by much — Haswell was roughly 8% faster than IVB-E, and Skylake hasn’t come to the E5 family yet. You’d get some uplift if your application supports and makes significant use of AVX2, but otherwise? There’s not a lot of upgrade to be had.
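
To put rough numbers on that reasoning, here’s a minimal back-of-the-envelope sketch in Python. The ~8% Haswell-over-IVB-E figure comes from the paragraph above; the additional Broadwell-over-Haswell IPC gain and the like-for-like clock values are illustrative assumptions, not measured numbers.

```python
# Back-of-the-envelope generational uplift: clock ratio x compounded IPC gains.
# The 8% Haswell-over-Ivy Bridge-E figure is from the article; the Broadwell
# figure and the clock values below are illustrative assumptions.

def estimated_uplift(old_clock_ghz, new_clock_ghz, ipc_gain):
    """Relative per-core performance gain of the new chip over the old one."""
    return (new_clock_ghz / old_clock_ghz) * (1.0 + ipc_gain) - 1.0

IVB_TO_HSW = 0.08   # ~8%, per the article
HSW_TO_BDW = 0.05   # assumed ~5% -- placeholder, not a measured number

combined_ipc = (1 + IVB_TO_HSW) * (1 + HSW_TO_BDW) - 1

# Top frequency is identical between the two parts per the text, so the clock
# ratio is 1.0 and essentially all of the uplift comes from IPC.
print(f"Estimated per-core uplift: {estimated_uplift(3.5, 3.5, combined_ipc):.1%}")
# -> roughly 13% over three years, before any AVX2-specific gains
```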

In short, there’s just not much reason to update the Mac Pro’s CPU — not until and unless Intel can field designs that truly merit it. While Apple will likely eventually refresh the Mac Pro, the only big winners will be Mac users who want to pack as many threads as possible into a single-socket system (Intel now offers Xeons with up to 22 cores in the E5 family).

What about GPUs?

GPUs are where the lengthy wait between refresh cycles really does bite customers. The current top-end Mac Pro fields a pair of D700 graphics cards based on AMD’s original GCN 1.0 architecture. AMD has built multiple cards since then that could’ve been used to upgrade these configurations, and the Polaris GPU inside the RX 480 would deliver better performance and more VRAM at a much lower TDP and price point.

The problem with criticizing Apple’s GPU performance is that Apple doesn’t care all that much about graphics, period. OS X continues to ship a version of OpenGL that’s nearly six years old, and Apple isn’t supporting Vulkan, choosing instead to field its own close-to-metal API, Metal. Apple isn’t exactly out of step with the rest of the industry; outside of boutique laptops, there just aren’t very many systems shipping with discrete GPUs any more — at least, not many below the $1,000 price point, and not with decent graphics hardware. If your workloads depend on GPUs and scale with graphics horsepower, you aren’t using Apple. (There are plenty of workloads that run better on GPUs than CPUs but don’t actually scale all that well, which is why I make that distinction.)

For a brief moment in 2013, with the launch of the Mac Pro, it looked like Apple might embrace OpenCL, GPGPU programming and offload, and put a new focus on integrating high-end GPUs into its various products. That moment has come and gone. While I do suspect we’ll see Apple hardware with refreshed graphics hardware, it’ll be the 14nm refresh cycle that drives it, not any particular interest in GPU computing or graphics as a whole.

So… where’s that leave us?

I’m not an Apple apologist. I use an iPhone, granted, but I’m still back on the 5c and I plan to use it until the screen cracks or the battery dies. Posts like this inevitably ignite arguments over whether Apple devices are worth paying for, and whether a different manufacturer offers more value at a given price point. Spoiler alert: Oftentimes, they do, though you may have to do an infuriating amount of searching before finding a system you actually like.

It’s been a while since Apple updated some of its hardware, that hardware could be better than it is, and a refresh would yield systems at least a little sexier than today’s. But Apple has kept updating most of its laptop lines to take advantage of better battery life and performance improvements, while the performance of desktop CPUs has largely stagnated. Is it ignoring GPUs? Yes — but that’s completely par for Apple. The Mac Pro in 2013 was unusual precisely because it put GPU compute first and foremost. Apple’s decision to mostly ignore the segment afterwards might be unfortunate, but it’s scarcely surprising.

Apple has held off on making fundamental platform changes precisely because it’s been waiting for the underlying technology to advance enough to make the changes worthwhile. Given the frustration of sorting through hundreds of nearly identical laptops from multiple manufacturers every time a friend or family member asks for help choosing a laptop, I’m not sure I can blame them.

Source: ExtremeTech

AMD’s RX 470 GPU debuts with excellent performance for under $200

Today, AMD formally launched its new RX 470, the slimmed-down little brother of the larger RX 480. Debuting at $179, the new GPU offers much of the performance of the RX 480 in a cheaper package. Reviews have come in from a number of sites with generally positive results, though there’s an interesting discrepancy I want to touch on.

First, the big-picture takeaway. The RX 470 is a 2,048:128:32 GPU configuration (that’s cores, texture units, and ROPs). It’s a modest step down from the RX 480’s 2,304:144:32 configuration, and the GPU’s base and top frequencies have been trimmed as well, down to 926MHz and 1206MHz as compared with 1120MHz and 1266MHz respectively.
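
For context, here’s a minimal sketch of the on-paper shader math using the core counts and boost clocks quoted above. The 2-ops-per-clock FMA factor is the standard convention for quoting GPU FP32 throughput, not a figure from the reviews themselves.

```python
# Theoretical FP32 throughput = shader cores x 2 ops/clock (FMA) x clock.
# Core counts and boost clocks are taken from the spec rundown above.

def tflops(cores, boost_mhz):
    return cores * 2 * boost_mhz / 1e6

rx480 = tflops(2304, 1266)   # ~5.83 TFLOPS
rx470 = tflops(2048, 1206)   # ~4.94 TFLOPS

print(f"RX 480: {rx480:.2f} TFLOPS")
print(f"RX 470: {rx470:.2f} TFLOPS ({rx470 / rx480:.0%} of the RX 480 on paper)")
# -> on paper, the RX 470 offers roughly 85% of the RX 480's shader throughput,
#    which is why the review-to-review gap discussed below is worth watching
```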

With 4GB of VRAM and an updated Polaris-class core, the RX 470 makes short work of AMD’s previous GPUs in this space as well as the Maxwell-based competition. It’s up to 2x faster than the GTX 960 in DirectX 12 and Vulkan, and maintains a healthy lead against Nvidia’s GPU in every case — even Project Cars, which tends to favor Nvidia. AMD’s lineup in this price band was previously anchored by old GCN 1.0 cards, which means the RX 470 compares even more favorably against them.

With that said, there’s some indication that which version of the RX 470 you buy could matter quite a bit. Most of the reviews out today are on the Asus Strix RX 470; AMD didn’t distribute its own reference cards for this review. THG’s investigation of that card found its performance to be in-line with expectations, but its $200 list price is extremely close to the RX 480. The Asus card also has some issues with hot spot formation on the board rather than on the GPU.

Ow. Image by Tom’s Hardware

The voltage regulators are packed tightly, poorly cooled, and almost certainly operating out of spec. AMD took heat for the reference design it fielded with the RX 480, but this is Asus’ own custom PCB and design — not AMD’s. The GPU’s frequency bounces around a good deal and rarely hits the 1270MHz Asus officially rates it for, instead topping out around 1150MHz. Asus also limits its card to a six-pin power connector, while at least one company, MSI, shipped an eight-pin version.

There’s some early indication that either the additional power or better GPU cooling may have had an impact on comparison results. Eurogamer also reviewed the RX 470, but they used an MSI GPU with an eight-pin power connector. Even at stock frequencies, the gap they measured between the RX 470 and the RX 480 was significantly smaller than what Tom’s Hardware, Hot Hardware, Legit Reviews, and PC World all recorded. (HH, ET, and THG all received the Asus Strix GPU, Legit Reviews and PC World got an XFX variant, and Eurogamer has the only MSI review we’ve seen so far).

The table below summarizes the performance delta between the RX 480 and RX 470 as measured by Eurogamer and Tom’s Hardware Guide. All of the tests in question appear to have been run with the same detail levels and resolution (1080p) at both sites, and with the same API where applicable.

[Table: RX 480 vs. RX 470 performance deltas at 1080p]

Compiled data from THG, Eurogamer. “Delta” refers to the RX 480’s speed advantage.

This gap, and the substantial variance in AIB board pricing, seems to explain why different sites have reached significantly different opinions on the RX 470 itself. Eurogamer’s review claims that the RX 470 can offer 95%+ of the performance of the RX 480, while PC World declared that the RX 470 is a “Great graphics card with a terrible price” (the XFX RX 470 RS Black Edition that PC World reviewed has a $220 list price).

And speaking of price…

Before the RX 470 formally launched, word on the street was that the GPU would launch at $100 and $150. You’ll find these numbers widely mentioned online and they were part of our own reporting on the topic.

In retrospect, it’s not clear where this information came from. It’s not included in any formal deck that AMD sent to ExtremeTech, despite having been widely reported online last week. The $179 target price for the RX 470 (AMD calls this an SEP, or Suggested E-tail price) was only formalized a few days before reviews went up.

Whether or not the $179 SEP makes sense depends on whether your benchmark results look more like Eurogamer’s or more like THG’s. An RX 470 that offers 95% of RX 480 performance in a smaller power envelope and for $20 less is a winning deal; an RX 470 at 85% of RX 480 performance and $20 less isn’t as compelling. We’re investigating the situation and will report back once more information is available.
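
As a quick illustration of why that distinction matters, here’s a minimal perf-per-dollar sketch. The $199 RX 480 price is implied by the “$20 less” comparison above, and the 95%/85% performance figures are the two scenarios just described; none of these are new measurements.

```python
# Perf-per-dollar under the two scenarios described above. Performance is
# normalized to the RX 480 = 100; the $199 RX 480 price is implied by the
# "$20 less" framing in the text, not a separately confirmed figure.

def value(perf, price_usd):
    return perf / price_usd

rx480 = value(100, 199)
rx470_best = value(95, 179)    # Eurogamer-like result
rx470_worst = value(85, 179)   # THG-like result

print(f"RX 470 value vs. RX 480 (95% case): {rx470_best / rx480 - 1:+.1%}")
print(f"RX 470 value vs. RX 480 (85% case): {rx470_worst / rx480 - 1:+.1%}")
# -> roughly +5.6% better value in the first case, about -5.5% worse in the second
```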

A huge step forward for budget gaming

AMD’s successful RX 480 launch was tempered by Nvidia’s decision to return fire with the GTX 1060 (albeit at a higher price point) and the power issues that initially hit the reference cards. Neither is an issue here. While there are some questions about specific GPU models, all reviews are in agreement — the RX 470 is a huge performance upgrade over any Nvidia or AMD GPU competing in this price band. The only exception is the RX 480, and that GPU launched just last month.

While there are some questions about just how much performance the RX 470 “should” offer compared with the RX 480, the general shape of the comparison is clear. At $179, the RX 470’s price/performance ratio at least scales well against the RX 480: the GPU that costs slightly more also performs slightly better. No problem there. Some RX 470 SKUs may be slightly better deals than others, but there are no bad results to speak of. The only question is whether the RX 470’s and RX 480’s prices will settle back down towards where they ought to be.

Source: ExtremeTech

Intel recalls Basis Peak fitness trackers due to fire hazard, kills product line

As more and more companies take on the Internet of Things, it’s a given that plenty of devices will have early issues and problems. Even so, the complete recall and cancellation of the Basis Peak is noteworthy — especially given all the money and effort Intel has been pumping into its own Internet of Things business.

Basis is a business funded by Intel Capital with a focus on designing and building fitness trackers. The company and its products are described as follows:

The Basis band has the most advanced sensors on the market, continuously capturing heart rate patterns, motion, perspiration and skin temperature throughout the day and night. You can then find opportunities in your daily routine to insert take-charge-of-your-health actions, like getting more activity and regular sleep. Simply choose the area you want to focus on and Basis does the rest. Basis automatically captures activity and sleep with no button pushing or mode setting required.

Perhaps Basis should have included an external temperature sensor?

Several months ago, Basis reported that some 0.2% of its fitness tracker owners had experienced overheating from the device. As a result, it recommended that Basis owners stop using the watch until a software update was available to address reports of burning or blistering associated with a small number of devices. Resolving the problem apparently proved impossible, which led to the total recall discussed here.

While it might be tempting to pin this issue on Intel’s general failure to win much market share for its IoT products, the Basis Peak is, to the best of our knowledge, built around a Cortex-M CPU, not an Intel chip. Another odd thing is that the Basis Peak has been on the market for 18 months and was generally well-reviewed when it shipped. This could lend credence to Basis’ own claims that the issue only affects 0.2% of smart watches — with that low of an incidence rate, the company may well have needed time to identify the cause. One of the problems in science and data gathering is that very low probability events are extremely difficult to distinguish from background noise.

The Basis Peak won accolades when it launched for being the first accurate heart rate and sleep monitoring fitness tracker, though its sales were never very high. It’s not clear if Basis will continue building products after this, but the damage to Intel’s own IoT efforts is PR, not technical.

Details on the recall and additional information for affected customers are available from Basis.

Source: ExtremeTech

PSA: Windows 7 and 8 keys apparently still work for free Windows 10 upgrades

Heard the news? Microsoft has stopped offering Windows 10 as a free upgrade after 12 months of begging (sorry, “aggressive questioning”). Except that doesn’t seem to quite be the case.

There are at least two ways to still get Windows 10 for free, provided you have an authentic Windows 7 or 8.1 license. First, the Windows 10 upgrade offer hasn’t actually expired if you use assistive technologies like screen readers or on-screen magnifying glasses, and the only authentication the system performs is a button that says “Yes, I use these technologies.” In short, anyone who wants Windows 10 can still get it that way.

Over at ZDNet, however, Mary-Jo Foley has found an alternate-alternate method of downloading Microsoft’s OS that doesn’t require you to lie about your need for a seeing-eye dog, on-screen keyboard, or color blindness. It seems you can still activate Windows 7 and 8.1 keys during the upgrade process. A Microsoft spokesperson provided the following statement to ZDNet:

Users upgrading their PC for the first time will need to enter a Windows 10 product key. Users who’ve previously installed Windows 10 on their PC should activate successfully with a digital entitlement when reinstalling Windows 10 on that PC.

Note that the upgrade process referred to in the original story is specifically for system keys that were never upgraded or authorized for Windows 10 during the past 12 months. This statement is, at best, inaccurate — though it’s also not surprising a PR person might not know why a program that was widely advertised as ending on July 29 hadn’t actually ended yet.

If we had to guess, we’d guess that Microsoft hasn’t pulled the switch just yet because it wants to give people a little more time. The alternative is that Microsoft made a very public show of ending the Windows 10 upgrade offer and will pull the GWX.exe tool from systems in a future Windows Update — but doesn’t actually intend to stop giving the OS away. It’ll just require a little bit more elbow grease to download as time goes by, or a willingness to claim a disability you may not actually possess.

On a personal note: When I upgraded my own Windows 7 rig to Windows 10, I ran into a problem with my display driver. The upgrade assistant kept insisting that my display was incompatible with Windows 10. Since I’m running a GTX 970 with the latest Nvidia drivers, I knew that wasn’t true, but couldn’t lock down the problem. I wound up running the installer manually and handled the upgrade that way.

I wanted to beat this message with a crowbar until it fused hydrogen.

It turns out that this problem was caused by an old, old driver I’d installed years ago, while working with the TightVNC client for accessing testbeds. I still use TightVNC on occasion, mostly because I need a VNC solution that allows for file transfers and doesn’t install or emulate a GPU driver. (Microsoft’s RDP is roughly 100x faster, but doesn’t always play nice with GPU drivers when testing 3D applications. If you have a client you’d recommend, let me know.) Some years ago, there was a GPU driver you could install to improve performance in the application, and I’d installed it. This, it turns out, is what Windows 10 was picking up and failing to understand. Once I removed that driver, I had no problem upgrading to the Anniversary Update for Windows 10.

I’d still recommend that users at least claim their free copy of Windows 10 before MS decides to pull its various plugs, but I don’t see a problem with sticking with whatever version of the OS you prefer.

Source: ExtremeTech

Google cuts Chrome power consumption, but Microsoft still thinks you should use Edge

Over the past few months, Microsoft has taken potshots at Google for high power consumption under Chrome. As we’ve discussed, there’s truth to the company’s statements. Independent tests have regularly demonstrated that Chrome uses more power than Microsoft Edge, Internet Explorer, Firefox, or Opera, though the exact ratios and scores depend on workload and use-case.

Google announced several changes to Chrome 52 for Android this week, with faster video load times, less buffering, and better overall power consumption management, and showed off the improvements in a short demonstration video.

While these benefits are currently Android-specific, Google isn’t going to keep them there. “Since the beginning of the year, we’ve made a 33% improvement in video playback GPU/CPU power consumption on Windows 10,” a Google spokesperson told The Verge. “And by Chrome 53, we feel confident that we’ll be at parity with other browsers in terms of power consumption for the majority of video playback on the internet.”

Meanwhile, at the Hall of Justice (sorry, the Redmond campus)…

Microsoft, it seems, has gone beyond just issuing blog posts to call out its own browser’s superiority. First there’s this tweet from analyst Patrick Moorhead, of Moor Insights and Strategy.

[Tweet from analyst Patrick Moorhead]

Since upgrading to Windows 10’s Anniversary Update, I’m also being prompted with this on my own system:

[Screenshot: post-upgrade Windows 10 prompt suggesting a switch to Microsoft Edge]

On the one hand, it’s nice to see Microsoft respecting system defaults after an upgrade, since failing to do so got it in hot water with other companies last year. In the past, Microsoft has changed consumer preferences when installing upgrades or updates.

On the other hand, this kind of insistent nagging is exactly what has turned plenty of people off Windows 10 in the first place. While I’m willing to give Edge another shot post-Anniversary Update, my experience with the browser has not been positive. It’s not just a question of extensions, which Edge finally supports now — I’ve had real problems with the browser refusing to close multiple tabs and lagging on a number of sites where other browsers have no issue. It’s not just a question of adblock, either; Edge has been problematic for me across multiple testbeds, even with adblock disabled in other browsers.

When Microsoft announced its intent to move Windows to an OS-as-a-service model, plenty of people predicted that the company would either start charging a monthly subscription fee (it hasn’t), or would launch attacks against other major distribution platforms (it also hasn’t). But there’s another, more subtle consequence of that mindset that I don’t think Microsoft has even considered. When your software is sold as a service, it’s never really finished. This, in turn, means Microsoft’s various internal departments (shown below) are in a never-ending tussle to attract user eyeballs and attention.

Image credit: bonkersworld.net

In the old days, MS released a new OS with a bevy of new features, marketed the hell out of it as a one-time deal, and then moved on. Now, various applications receive ongoing updates — and all of those applications need to justify their own existence with improved user engagement, satisfaction ratings, and increased time spent in-app. Instead of nagging you to try something once, MS has an ongoing incentive to keep nagging you. And, of course, the new tracking in Windows 10’s telemetry lets it measure whether or not said nagging is effective in a way that previous versions of Windows didn’t.

Welcome to the future. Somehow I never imagined it would be this annoying.

Source: ExtremeTech

Nvidia Titan X offers incredible performance, at a significant cost

When Nvidia launched the GTX 1080 earlier this year, it didn’t take long before rumors of a new Titan-branded card followed in its wake. Unlike the GP100, which will utilize HBM2 and launch in early 2017, the Nvidia (not GeForce) Titan X is based on a different piece of silicon: GP102. This new top-end card combines a wider memory bus and higher core counts with the same GDDR5X that debuted with the GTX 1080, and it offers significantly higher performance — if you can stomach the price.

Multiple reviews of the card are up at Hot Hardware, PC Perspective, and Tom’s Hardware Guide. The Titan X packs 3,584 cores, 224 texture units, and 96 ROPs, with a base clock of 1417MHz and a boost clock of 1531MHz. That’s 40% more cores and texture units than the GTX 1080 and 50% more ROPs, though these gains are partly offset by clock speeds roughly 12% lower than the GTX 1080’s. Memory bandwidth is up to 480GB/s, and the TDP is just 250W — impressive for a card with specs like these.
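
To show how the extra resources and lower clocks net out on paper, here’s a minimal sketch of the theoretical FP32 math. The Titan X figures come from the paragraph above; the GTX 1080’s 2,560 cores and 1733MHz boost clock are Nvidia’s published specs, added here for the comparison.

```python
# On-paper FP32 throughput = cores x 2 ops/clock (FMA) x boost clock.
# Titan X numbers are from the article; the GTX 1080 figures (2,560 cores,
# 1733MHz boost) are Nvidia's published specs, included for comparison.

def tflops(cores, boost_mhz):
    return cores * 2 * boost_mhz / 1e6

titan_x = tflops(3584, 1531)   # ~11.0 TFLOPS
gtx_1080 = tflops(2560, 1733)  # ~8.9 TFLOPS

print(f"Titan X:  {titan_x:.1f} TFLOPS")
print(f"GTX 1080: {gtx_1080:.1f} TFLOPS ({titan_x / gtx_1080 - 1:+.0%} for the Titan X)")
# -> roughly +24% on paper: 40% more cores, partly offset by ~12% lower clocks
```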

Here’s the bottom line: This GPU is statistically a monster, and it performs like one. THG notes: “If you were of the mind that one GP104 couldn’t handle 3840×2160, Titan X could be your solution.” AMD’s Fury X has already been generally outclassed by the GTX 1070 and 1080, but fans of Team Red could at least point to a handful of DirectX 12 titles where the Fury family competed well against Nvidia’s latest GPUs. That’s gone at the top end — even when the Fury X manages to trade shots with the GTX 1080, Nvidia’s Titan X obliterates them both by 20-40%. This is the first GPU that can drive 4K at high frame rates — but boy, it comes at a price.

If you have to ask how much it costs…

More than two months after launch, the GTX 1080 is functionally a $650 – $750 GPU (expected MSRP is $599, but good luck finding one at that price). Now, Nvidia has launched an even higher-end SKU, available only from its own website. The Nvidia Titan X is absolutely the fastest and most powerful GPU on the market, but at $1,200, it’s not any kind of deal. Its price/performance ratio is poor by any measure — you’re paying nearly 2x as much money for a top-end performance increase of 40%.

The flip side to this, however, is that cards like the Titan X aren’t made for people who have to ask how much the GPU costs — they’re designed for people who can afford to throw insane amounts of money at a product. If you want the absolute best GPU performance money can buy, the Nvidia Titan X is the best GPU on the market. Keep in mind, however, that these ultra-expensive GPUs don’t hold their value particularly well, and Nvidia has a history of cutting them off at the knees once it needs to respond to a competitive imbalance.

A $1,200 Titan X doesn’t just give Nvidia an enviable pole position, it also establishes a huge amount of leeway in the company’s price structure. Let’s say AMD’s top-end Vega is faster than the GTX 1080 and comes in at $700. Nvidia can easily address that with a future GTX 1080 Ti for, say, $750 — then trim prices on the GTX 1080 if it needs to. Last year, the GTX 980 Ti offered 95%+ of the GTX Titan X’s performance for hundreds of dollars less. There’s nothing stopping Nvidia from doing the same thing this year, and the company has explicitly left itself room to maneuver.

One last point, while we’re on the topic. With the Titan X, Nvidia has unambiguously seized the performance crown in every benchmark, period, whether in DX12 or DX11. But this was always going to happen, simply as a result of how AMD and Nvidia chose to launch their respective hardware. Nvidia shot for the high-end first, AMD chose to refresh its entry-level and midrange products. As a consequence of that, NV’s 14nm hardware completely dominates the old 28nm cards from AMD.

Given that Vega is supposed to be a completely new architecture compared with GCN, it’s pointless to speculate on whether AMD can close the gap. It’s not as simple as just predicting what Fat Polaris might look like, or what kind of performance it could offer. There are other factors to consider as well, like AMD’s use of HBM2 as compared with GDDR5X. The bottom line is, everyone knew that delaying its high-end hardware would leave AMD unable to compete against Nvidia at the top end of the market — including AMD.

If you don’t like the idea of buying a $1,200 GPU in August that might be outclassed by a $750 GPU come December, don’t invest in Titan-class cards. But if you want the fastest GPU money can buy now, period, the Nvidia Titan X has no competition.

Now read: How to buy the right video card for your gaming PC

Source: ExtremeTech

The last woolly mammoths in North America didn’t starve — they died of thirst

Of all the prehistoric megafauna on Earth, few have captured the imagination as thoroughly as the woolly mammoth. Scientists have researched the feasibility of cloning mammoths for decades, but knowing how and why they died out would tell us a great deal about the feasibility of restoring the species today. New research on the last-surviving mammoth population in North America has shown that this particular group probably didn’t die as the result of human hunting or a loss of food.

Woolly mammoths generally went extinct between 10,000 and 14,000 years ago, along with the majority of the Pleistocene megafauna. There are, however, two known exceptions. Mammoths persisted on two islands: Wrangel Island, a Russian island in the Arctic Ocean, and Saint Paul Island, off the Alaskan coast. The latter is the last-known location where mammoths survived in North America (3600 BC), while the Wrangel population lived until roughly 2000 BC.

The Beringia land bridge. Image by NOAA.

The two major reasons why megafauna like the mammoth went extinct are thought to be climate change and human predation. As the climate warmed, humans expanded into new territories that were formerly blocked by ice or too harsh to sustain life on an ongoing basis. The populations on Saint Paul and Wrangel survived as long as they did partly because they were isolated from humans and weren’t hunted for food.

One possible explanation for the Saint Paul mammoths’ eventual extinction would be the glacial melt that created the island in the first place. The GIF above shows how the oceans rose, turning Saint Paul into an island and trapping a group of mammoths in the process. Despite being stuck on a comparatively tiny rock, the mammoths survived for thousands of years — long after the island’s modern shorelines were established. Glacial melt might have isolated the population, but it’s not what killed them off.

The research team collected mammoth remains from a cave on St. Paul and took sediment samples from a nearby lake. They then analyzed the sediment samples looking for the spores of fungi that live on the island and preferentially reproduce in animal dung. Elephants are famous for producing mammoth amounts of dung and the sediment samples reflected this up to about 5,600 years ago. Other analyses of the sediment cores showed that vegetation and plant life on the island had remained constant over time as well — the mammoths didn’t die due to a lack of food, either.

Thirst, not hunger, is thought to have doomed the mammoth population. The team writes:

Instead, the extinction coincided with declining freshwater resources and drier climates between 7,850 and 5,600 y ago, as inferred from sedimentary magnetic susceptibility, oxygen isotopes, and diatom and cladoceran assemblages in a sediment core from a freshwater lake on the island, and stable nitrogen isotopes from mammoth remains. Contrary to other extinction models for the St. Paul mammoth population, this evidence indicates that this mammoth population died out because of the synergistic effects of shrinking island area and freshwater scarcity caused by rising sea levels and regional climate change.

Saint Paul Island lacks any spring or other source of fresh water, which means there was no way to replenish its supply. As the climate dried, the amount of water available to the mammoths would have dwindled, while rising sea levels allowed salt water to penetrate the soil from below. The research team conducted a comprehensive analysis of the diatom fossils present within the sediment cores and found evidence that the types of diatoms in the water had changed dramatically over time. Older core samples showed evidence of diatoms — single-celled algae — that preferred fresh water and a depth of several meters. This type of diatom was plentiful in core samples dated to ~5800 BC and became much less common thereafter before vanishing altogether. The diatoms that replaced it are from species that thrive in shallower waters with a higher concentration of salt.

In short, the mammoths died out at a time when the island still had enough plants to feed them and space for them to live on, but when the quality and amount of water had precipitously declined. It’s because of this that the research team believes thirst ultimately killed the mammoth population. It’s also a reminder that climate change can damage ecosystems without inundating an area. Freshwater contaminated by saltwater seeping in from the ocean can kill plants and effectively poison animals, leading to dramatic ecological changes in a relatively short period of time.

Source: ExtremeTech

Backblaze vets 8TB drives, releases updated hard drive reliability data

Backblaze has released its quarterly details on hard drive reliability, including new information on its initial deployment of 8TB hard drives. While SSDs have made marked inroads into the hard drive market thanks to a rapidly diminishing cost-per-bit, hard drives still reign supreme as the most cost-effective method of storing data.

Backblaze kicked off its 8TB migration by deploying more than 2,700 Seagate HDDs. The company migrated an estimated 6.5PB of data from a set of Storage Pods built with 2TB HGST hard drives to a set of Seagate 8TB drives, quadrupling the amount of storage available per-pod. For those of you curious about how much data Backblaze stores in total, the company has released a chart showing its own capacity growth rate over the past four years.

[Chart: total petabytes of data managed by Backblaze over the past four years]
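
As a minimal sketch of the migration math described above (the drive count and 6.5PB figure come from the preceding paragraph; this is raw capacity only, ignoring redundancy and overhead within a Storage Pod):

```python
# Rough capacity math for the 2TB-to-8TB migration described above.
# Raw capacity only -- no allowance for redundancy or filesystem overhead.

DRIVE_COUNT = 2700
OLD_DRIVE_TB, NEW_DRIVE_TB = 2, 8

new_raw_pb = DRIVE_COUNT * NEW_DRIVE_TB / 1000
migrated_pb = 6.5

print(f"Raw capacity of the new 8TB drives: ~{new_raw_pb:.1f}PB")
print(f"Per-pod capacity increase: {NEW_DRIVE_TB // OLD_DRIVE_TB}x")
print(f"Migrated data: {migrated_pb}PB (~{migrated_pb / new_raw_pb:.0%} of the new raw capacity)")
```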

Early reliability data on the new drives is good, with little sign of a bathtub curve (elevated failure rates early in a drive’s deployment). Most of the drives have minimal failure rates, though there are a few cases where the gap between the low and high ends of the confidence interval is particularly large (these seem to be cases where Backblaze has either only recently deployed the drives or has had a small number of failures in a small pool). As the 8TB drives see more use, these figures should settle down. The annual failure rate of 2% across all drive families is excellent.

[Table: cumulative annualized failure rates by drive model, Q2 2016]

Backblaze offers the following explanation for how it calculates its annualized failure rates.

Some people question the usefulness of the cumulative Annualized Failure Rate. This is usually based on the idea that drives entering or leaving during the cumulative period skew the results because they are not there for the entire period. This is one of the reasons we compute the Annualized Failure Rate using “Drive Days”. A Drive Day is only recorded if the drive is present in the system. For example, if a drive is installed on July 1st and fails on August 31st, it adds 62 drive days and 1 drive failure to the overall results. A drive can be removed from the system because it fails or perhaps it is removed from service after a migration like the 2TB HGST drives we’ve covered earlier. In either case, the drive stops adding Drive Days to the total, allowing us to compute an Annualized Failure Rate over the cumulative period based on what each of the drives contributed during that period.
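
Here’s a minimal sketch of that calculation in Python, using the July 1st/August 31st example from the explanation above; the other records are made-up filler to show how drives that never fail still contribute drive days. It’s an illustration of the scheme Backblaze describes, not its actual code or data.

```python
# Annualized Failure Rate from drive days, per the scheme described above:
# AFR = failures / (drive_days / 365). A drive only accumulates days while
# it's installed; the records below are illustrative, not Backblaze's data.

from datetime import date

def drive_days(installed, last_day):
    # Inclusive of both endpoints, matching the 62-day July 1 -> August 31 example.
    return (last_day - installed).days + 1

records = [
    # (installed, failure date or census date, failed?)
    (date(2016, 7, 1), date(2016, 8, 31), True),    # the example above: 62 days, 1 failure
    (date(2016, 1, 1), date(2016, 8, 31), False),   # still running at the census date
    (date(2016, 3, 15), date(2016, 8, 31), False),  # still running at the census date
]

total_days = sum(drive_days(start, end) for start, end, _ in records)
failures = sum(1 for *_, failed in records if failed)

afr = failures / (total_days / 365) * 100
print(f"{total_days} drive days, {failures} failure(s), AFR = {afr:.1f}%")
# With only three fake drives the rate is meaninglessly high; at Backblaze's
# scale (tens of thousands of drives), the same formula yields the ~2% figure.
```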

Seagate continues to be Backblaze’s dominant supplier, because (and this is according to Backblaze) neither Toshiba nor Western Digital is particularly interested in selling the company hard drives. This seems rather unlikely given that Toshiba and WD are in the hard drive-selling business, and may have more to do with price competitiveness. Annualized failure rates for HGST drives continue to be lower than any of the products from Toshiba, Seagate, or Western Digital, but the lower cost of Seagate hardware apparently keeps them in the driver’s seat.

Earlier this year, Backblaze released its first cumulative report on hard drive failures after logging one billion hours of drive data. As always, data presented here should be treated as indicative of drive failure rates in particular workloads and scenarios. The Backblaze data set is by far the best and most thorough data available online on how HDDs perform in the real world — but no one, including Backblaze, argues that its data is representative of all drives in all workloads, or that it can be perfectly extrapolated to other uses. Failure rates can and will vary by workload — and a certain amount of luck.

Now read: Who makes the most reliable hard drives?

Source: ExtremeTech