Shared posts

11 May 17:26

Fixstars Announces World’s First 6TB 2.5-Inch SATA SSD

by Pulkit Chandna

Fixstars SSD-6000M to begin shipping in July

Japanese firm Fixstars may not be a familiar name in the storage space, but it is doing everything it can to change that. In February, it introduced the 3TB SSD-3000M in the States, claiming it to be the world’s highest-capacity 2.5” SATA SSD (solid-state drive). But the SSD-3000M is no longer the highest-capacity drive in the company’s own lineup, let alone in the world. That honor now belongs to the SSD-6000M.

As you may have guessed from the name, the SSD-6000M is a 6TB monster. Like the 1TB SSD-1000M and the 3TB SSD-3000M, it is a 2.5-inch SATA III offering that uses the company’s proprietary SSD controller. However, unlike both of those drives, it uses 15nm MLC (multi-level cell) NAND rather than 19nm MLC NAND. It is said to be capable of sequential read and write speeds of 540MB/s and 520MB/s, respectively.
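As a quick sanity check on those figures, here's a back-of-the-envelope calculation of how long a full sequential write of the drive would take. The decimal-unit capacity (6TB = 6,000,000MB) is our assumption, following how storage vendors typically quote sizes:

```python
# Time to fill the SSD-6000M end to end at its rated 520MB/s
# sequential write speed, assuming decimal units.
capacity_mb = 6_000_000
write_speed_mb_s = 520

seconds = capacity_mb / write_speed_mb_s
hours = seconds / 3600
print(f"{hours:.1f} hours")  # roughly 3.2 hours
```

In other words, even at full tilt, writing the whole drive takes a little over three hours.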

“The unparalleled performance of our previous model’s (The SSD-3000M) sequential I/O helped propel our SSDs and garner lots of attention. Since many of our customers desire even greater capacity, I am excited to offer a new solution and grow the product line with the inclusion of the larger SSD-6000M,” Fixstars CEO Satoshi Miki was quoted as saying in a press release. “Since our SSD’s capacity is now able to compete with high-end hard drives, we feel our product can draw the attention of data centers as well.”

While you can order one now, the price of the drive is only available on request. Judging by the price of the 1TB variant (approximately $820), it is safe to say the 6000M won’t be cheap.

Follow Pulkit on Google+

08 May 16:08

Dell XPS 13 Review

by Jimmy Thang

The new standard for Ultrabooks

Dell’s XPS 13 line of laptops has been around for many years now, and while the company’s 2015 model doesn’t try to reinvent the wheel, Dell has made enough improvements to it to claim that it has the best Ultrabook on the market. We’ll see about that…

The compact chassis gives the notebook some sex appeal. Measuring 11.9 x 8 x 0.5 inches and weighing just two pounds, 12.4 ounces, it’s extremely portable. While a chassis this small usually sports an 11-inch panel, Dell impressed us with its finesse in squeezing in a 13.3-inch 10-point touch display. This is done through what Dell calls its “Infinity Edge” display. That’s marketing jargon for what amounts to really thin, flush bezels that measure 5.2mm across.


While the panel isn’t huge, the resolution is enormous at 3200x1800. Certain Windows programs might look small, but, surprisingly, most applications look fine. Some users may also be put off by the glossy panel, but the viewing angles on this IPS display and the vibrant colors it offers are undeniably fantastic.
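For the curious, the pixel density implied by those numbers works out as follows (a standard diagonal-PPI calculation using the quoted 13.3-inch size and 3200x1800 resolution):

```python
import math

# Pixel density (PPI) of a 3200x1800 panel with a 13.3-inch diagonal.
width_px, height_px = 3200, 1800
diagonal_inches = 13.3

diagonal_px = math.hypot(width_px, height_px)  # pixels along the diagonal
ppi = diagonal_px / diagonal_inches
print(f"{ppi:.0f} PPI")  # about 276 PPI
```

That's well past the point where individual pixels are visible at laptop viewing distances, which is why the high-DPI scaling quirks in older Windows programs are the only real downside.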

The body shape puts it in the MacBook Air category of laptops, and while it even sports a downward slope that’s similar to Apple’s popular notebook, it’s got its own thing going aesthetically. The XPS 13 is mainly composed of machined aluminum, and has a carbon fiber palm rest. Most of the notebook is black, with the exception of its silver back; here you’ll also find the Dell logo etched in. It would be nice if the logo had a lighting option (all the cool kids are doing it), but maybe Dell designers thought it would be best to go minimal this time around. Luckily, it does have a backlit keyboard, which is more important. Because the laptop is small, some people may be bothered by the compact keyboard, but it didn’t bother us. We weren’t too crazy about the trackpad, however, which offers a little unwanted acceleration. The notebook’s speakers have a surprising amount of firepower, but we’re a little let down that they’re side-firing as opposed to front-facing. In terms of ports, you’ve got your charging port, two USB 3.0 ports, mini DisplayPort, headphone jack, and SD card slot. We wish the laptop offered Mini-HDMI, but it’s got all the bare essentials otherwise, and offers more ports than other leading notebooks (ahem, Apple, we’re looking at you). 

When it came to performance, our Dell XPS 13 and its 2.2GHz Core i5-5200U processor didn’t blow our socks off. It performed better than our Acer S7’s Ivy Bridge processor in our Stitch CPU benchmark by eight percent, but lagged three percent behind in our multithread-heavy x264 test. It’s worth noting that this isn’t an apples-to-apples tier comparison, however. Our zero point uses a Core i7-3517U processor that has a max turbo frequency of 3GHz, which trumps the i5’s 2.7GHz top speed. Also, when our S7 zero point goes full blast, it’s almost leaf-blower loud. The XPS 13, on the other hand, plays it cool and quiet. 

In graphics territory, the XPS 13 gave our zero point a thorough thrashing. Intel has been making very noticeable improvements when it comes to integrated graphics; the fruit of its labors is a performance delta of 40–70 percent here. If you’re thinking this means you’ll be able to do some hardcore gaming on this thing, however, you’ll want to think again. Even with everything turned low, we got unplayable framerates (mid-20s) running Tomb Raider at 1080p. This proves that gaming laptops aren’t getting pushed off the table by lighter fare anytime soon.

There were a few other areas where the laptop failed to impress, battery life being chief among them. In our video-rundown test, the XPS 13 lasted 223 minutes, which is fine for even the lengthiest of Peter Jackson movie edits, but if you’re hoping for something that will last an entire workday, this particular model isn’t for you. Dell says its 1080p configuration will last longer, since the laptop doesn’t have to push as many pixels, but as we don’t have that unit to test, we can’t validate the claim. Another gripe we had with the XPS 13 is that the camera is located in the lower left-hand corner of the monitor, as opposed to the top-middle, where most laptops place it. This placement sort of validates the old saying that the camera adds 15 pounds; at least, that’s the excuse we’re going with in our case. Finally, the laptop does get a little warm underneath, but it won’t singe your lap.

These knocks against the XPS 13 prevent us from awarding it a Kick Ass score, but quibbles aside, this is still a great Ultrabook. Its 12.9-second boot-up time is among the fastest we’ve ever seen, and the laptop is ultimately quite sleek, incredibly portable, and priced fairly. It’s not perfect, but it may very well be the best Ultrabook out there at the moment. 

Price: $1,400; www.dell.com


08 May 16:08

Build It: Putting Together a Haswell-E Box

by The Maximum PC Staff

It may not seem like it at first, but Intel’s Haswell-E “Lite” is a sweet deal and a solid foundation for a build

Length of Time: 2-4 hours | Level of Difficulty: Medium

The Mission

We've found there are really two kinds of PC builders today: those who are content with Intel’s “consumer” LGA1150 socket, and those who always go for the big-boy socket. Yes, these are the folks who opted for LGA1366 instead of LGA1155, and when faced with LGA2011 or LGA1150, they went for the larger pin- and core-count again. For you, LGA2011-V3 and Haswell-E mean it’s time to get building with Intel’s “bargain” Core i7-5820K part. This $390 chip gets you six cores without the $560 premium Intel applied to its previous six-core processor.

With DDR4, a new chipset, and a new socket, it’s time to run you through the steps and know-how you need to build a new Haswell-E rig. And what do you know, we just built and overclocked one for this month’s cover story, so we decided to show you how we did it.

Not a Budget Box

This is a six-core, fire-breathing enthusiast machine meant to keep your computing needs satiated for the next 36 to 48 months. And just as you don’t buy a Mercedes with manual-crank windows and AM radio, we didn’t skimp here. We outfitted this rig with very nice components befitting its six-core royalty. Yes, there are ways to cut costs. If we really wanted to budget-out this baby, we could have used a cheaper case, cut the SSD size, and gone with RAM without a heat spreader. We could also notch down the GPU and ditch the optical drive, too. At that point, you might as well have built a Pentium K box.

So, we’ve established our intention to build a nicely configured machine, but here’s the thinking behind a few of our choices. Of all the parts here, spending $60 on an optical drive may seem extravagant, but as it may be the last optical drive you own, it might as well do everything, including burning BDXL discs. The other part sure to get attention is the 500GB Samsung 840 EVO drive. With a 3TB HDD in the box, do we really need to spend that much on the SSD? Yes. With the prices today, we believe the 500GB mark is the new “normal” for primary storage.

1. LGA2011-V3 What?


Here's the PSA for folks who don’t follow current events: LGA2011-V3 is not, repeat, NOT, compatible with the previous LGA2011 socket. You may snicker, but there will be people buying mobos and CPUs that are incompatible and not realize it until they’ve trashed the motherboard. Installing the LGA2011-V3 part isn’t all that different from its predecessor. Unlatch both arms on the socket, remove the plastic cap, and line up the chip by matching the small triangle in the corner of the CPU with the triangle on the socket or motherboard. The notches in the CPU should also line up. Now, lower the CPU into the socket without sliding it around or touching the pins in the socket, and latch it in place. Don’t throw away the plastic cap; you’ll need it if you have to return the board for service.

2. Because It's One Better


You don't have to put RAM in with the motherboard outside of the case; it’s just a lot easier. DDR4 is the new hotness and offers higher capacities and, eventually, higher speeds, too. As Haswell-E features a quad-channel memory controller, consult your motherboard manual on which slots to put your four DIMMs in. On boards that feature latchless connectors on one side, you need to ensure the RAM is fully seated; we’ve botched installs by not doing so. There is a slight curve in DDR4 modules to aid seating them, but you should still make sure the RAM is fully seated; skipping this is the typical rookie builder’s mistake. You should also install the back plate that comes with the cooler now, too.

3. Put In Your Foundation


You’re now ready to install the board. First, seat the I/O shield firmly in the case. Now, count the mounts in the case and make sure they are properly torqued down and cannot unscrew; the number of mounts should match the number of holes in your motherboard. Screw down the motherboard and begin connecting fans to available fan headers and the like. With this motherboard in particular, plug the wire from the I/O shield into the nearby header for it. This powers the nifty blue and red LEDs integrated into the I/O shield. If it bugs you, you can control this LED from within the Gigabyte utilities in the OS, which let you turn it off, turn it on, or even have it blink. You should also plug in the rest of the umbilicals you need, such as USB 3.0 and front-panel controls. You should install your PSU now, too.

4. SATA Confusion


One of the most common mistakes is to use the wrong SATA ports. Unlike that gimpy X79 chipset that only had two native SATA 6Gb/s ports, X99 has all SATA 6Gb/s ports. Still, rooks will plug into a third-party SATA port that isn’t as fast or isn’t even on. We prefer to run off of the native chipset ports first because they generally perform the best and there’s no need for drivers for most OSes. On this board, there’s also a SATA Express port that shares the SATA ports. Don’t worry about it. We haven’t seen a SATA Express device yet, and storage vendors seem pretty blasé, so for now, don’t sweat it. As with RAM, we recommend basic RTFM protocol before plugging in any drive. We also recommend installing the GPU after plugging in and running the SATA cables, as the card will block access to the ports.

5. Yolanda, What's Fonzie Like?


To cool this rig, we recommend mounting the radiator first so as not to put undue pressure on the hoses and motherboard. To do that, though, you’ll need to put the cooler in the top of the case and hold it in place with one hand while screwing down one of the corners. Keep the rest of the screws handy, too. Once you have one corner of the rad in place, put a screw in the opposite corner from the first point you mounted it. Only after you have done this can you remove your hand from the radiator. Now, just mount the pump and cold plate to the CPU and plug it into power. As this cooler is “smart,” you’ll need to plug it into an available USB header and download software from NZXT’s website for it, too.

6. Bezel Removal


This normally wouldn’t be a step we would bother mentioning, but just this once, we wanted to let those of you who are running optical drives in your Obsidian 750Ds know how to remove the bezel. If you’re like us and can’t read the manual, you might be tempted to pry off the front of the case, but hold back a sec as it’s actually easier than that. The bezels are held in place rather tightly but can be removed by reaching inside the case and squeezing the two arms that secure it while pushing it out. It may seem like overkill but it’s better than bezels that pop out every time you look at them too hard. With it out, just slide in your ODD. Also add your SSD and HDD and plug them into power and data.

7. Overclock It


With this CPU and a nice liquid cooler, there’s simply no reason not to overclock it. To start our overclock, we first consulted the Internets to get an idea of what others were getting from their retail CPUs and what methods they were using. Haswell-E has an additional “gear” or “strap” that allows you to run the b-clock at higher than a standard Haswell chip’s 100MHz, such as 125MHz. We, unfortunately, didn’t get very far with that method and so, with the clock ticking on our build, we fell back on a basic Turbo Boost overclock. To do that, we went into the Advanced CPU Core settings in the UEFI and manually set the turbo ratios based on the core loads. To hit our overclocks, we also increased the core voltage to 1.296 volts.

8. Validate!


Validation of an overclock’s stability is always an open debate. Is running your favorite game enough? We initially reached for our old standby: Prime95. We know board makers don’t recommend it as it’s “too stressful,” but, for a quick test that gets you into the ballpark, we rely on it. However, it proved too stressful for Haswell-E within a few minutes. Still needing to vet our settings, we used AIDA64 Extreme set to FPU mode and completed multiple runs. We’ll note that some recommend a 12-hour AIDA64 run; when you’re looking at 100MHz increments, that means days of testing. We instead erred on the side of caution. We think Haswell-E can go further, but we wanted to be realistic about it, too.

Intel's "Lite" Is a Heavyweight Contender

First, let’s point out something. The predecessor of the six-core Core i7-5820K is the four-core Core i7-4820K, which cost $325. When you factor in that you’re getting the same core count of the $580 Core i7-4930K, this Haswell-E “Lite” is looking pretty damned good. To find out just how good, we took our zero-point system with its Sandy Bridge Core i7-3930K CPU overclocked full-time to 3.8GHz (on air) and its GeForce GTX 690 and put it against this month’s build.
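Put in per-core terms, the launch prices quoted above work out as follows (a rough illustration only; it ignores clock speeds, IPC, and platform costs):

```python
# Dollars per core at launch pricing, per the figures in the text.
chips = {
    "Core i7-5820K": (390, 6),
    "Core i7-4820K": (325, 4),
    "Core i7-4930K": (580, 6),
}
for name, (price_usd, cores) in chips.items():
    print(f"{name}: ${price_usd / cores:.0f} per core")
# Core i7-5820K: $65 per core
# Core i7-4820K: $81 per core
# Core i7-4930K: $97 per core
```

By that crude measure, the "Lite" chip undercuts both of its predecessors by a wide margin.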

The result was pretty impressive. Remember, this is not a six-core chip wailing on a four-core part—the Core i7-3930K is a six-core chip that set you back $580 in its heyday and is no slouch in multi-threaded tasks. In the single-threaded tasks, we saw the Build It with a 17 percent and 22 percent advantage. In multi-threaded tasks, the margin opens up further, with a 27 percent advantage in Premiere Pro CS6 and a 29 percent advantage in the x264 HD 5.0 encoding test.

Part of that is from the clock-speed advantage. We are pushing the Build It PC to 4.2 to 4.6GHz depending on loads, which is around a 14 percent clock-speed advantage. The rest of the difference comes from the greater efficiency of the Haswell cores versus the older Sandy Bridge cores.
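To put numbers on that: with the zero point fixed at 3.8GHz and the build ranging from 4.2 to 4.6GHz depending on load, the clock-speed advantage spans roughly 11 to 21 percent, a range that contains the rough figure above:

```python
# Clock-speed advantage of the overclocked build (4.2-4.6GHz,
# load-dependent) over the zero point's fixed 3.8GHz overclock.
zero_point_ghz = 3.8
build_min_ghz, build_max_ghz = 4.2, 4.6

low = (build_min_ghz / zero_point_ghz - 1) * 100
high = (build_max_ghz / zero_point_ghz - 1) * 100
print(f"{low:.0f} to {high:.0f} percent")  # 11 to 21 percent
```

Whatever margin is left over after the clock-speed delta is the IPC gain from the newer architecture.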

In gaming, remember that the zero point’s single GeForce GTX 690 is a dual-GPU card that’s basically the equivalent of two GeForce GTX 680 cards in action. You can view this two ways: Man, that GeForce GTX 980 is fast if it can basically nip at the heels of a dual-GPU card that’s still recognized as one of the fastest cards in town. Or, of course, you can say: Man, that GeForce GTX 690 still packs some heat if it’s faster than the GeForce GTX 980.

The real question is: Is it worth an upgrade? That depends on your needs. If cutting your multi-hour encodes down by a third is money saved and you want a modern chipset in X99, then yes. For many people who put down nearly $600 for a six-core Sandy Bridge part, though, that chip is still viable.

07 May 17:21

The US army wants invisibility cloaks within 18 months

by Duncan Geere

The US Army has put out a call for proposals from companies for wearable camouflage that has the ability to shift its colour according to the background, making the wearer invisible. It wants to test the best designs in just 18 months.

That doesn't seem far away at all, but actually we're closer to that kind of technology than most people realise. Metamaterials that can bend light around an object were first demonstrated nine years ago, and since then many researchers have shown off their own invisibility cloak designs.

The tricky thing is that their invisibility usually only works in the lab from certain angles or at specific wavelengths. The US Army has specified that prototype uniforms submitted must work from all angles, in all terrain and weather conditions, across a wide range of temperatures.

That's a big ask, especially as any power source required by the camouflage must last at least eight hours and weigh less than half a kilo. But some firms have said that they're on their way there - most notably a Canadian company called Hyperstealth Biotechnology, which supposedly demonstrated camouflage technology to the US army in 2014.

At the end of the day, any technology that gets commissioned isn't going to allow for total invisibility - instead it'll be compared with existing camouflage patterns. Any improvement will almost certainly be grounds for research to continue.

07 May 17:21

Meet the Rombertik, the malware that self-destructs when discovered

by Desire Athow

Researchers at the Cisco-owned Talos Security Intelligence and Research Group have discovered a piece of malware, named Rombertik, that can best be described as exhibiting kleptomaniac and suicidal behaviours.

Ben Baker and Alex Chiu, who were part of the team that unearthed Rombertik, found that this malware lives in the victim's browser, much like a parasite, and exfiltrates login details and other sensitive pieces of information to an external server.

The data capture occurs at the source itself, i.e. as the target enters it in the browser, before the information is encrypted and sent over HTTPS.

While its propagation methods remain remarkably simple, relying mostly on social engineering and the gullibility of the weakest link in the system (i.e. human beings), it is what follows afterwards that makes it interesting.

"If the sample detected that it was being analysed or debugged it would ultimately destroy the master boot record [MBR]," wrote the pair.

A nasty piece of bits

Doing so makes your computer unusable, forcing you to reinstall your operating system altogether. If the malware doesn't have the permission to do that, it will encrypt the home folder and restart your computer.

That is not new behaviour, remarked David Emm, Principal Security Researcher at Kaspersky Lab. "Trashing sections of a hard disk, or corrupting data was quite common in the 1990s - that was a time when the threat landscape was dominated by cyber-vandalism e.g. Michelangelo, Dark Avenger, Maltese Amoeba, Chernobyl. Equally, it's an approach that has been employed more recently by 'wipers' (e.g. Shamoon) to sabotage infected systems. Likewise, encrypting data, while used in 'old school' attacks for mischief (e.g. One-half) has become a key feature in today's ransomware programs."

Making sure your antivirus software is installed and up to date, not clicking on suspicious attachments, and blocking certain attachment types will go a long way toward protecting you.

Source: Cisco's Talos

06 May 16:26

ASRock Rack Announces Mini-ITX LGA2011-3 Motherboard, with Quad-Channel SO-DIMM DDR4

by Ian Cutress

Hot on the heels of the recent announcement of a Haswell-E based mini-ITX motherboard from the main motherboard division of ASRock, the ASRock X99E-ITX, the server division has decided to release one of their own. One of the main complaints about the mainstream model was the reduction to dual-channel DDR4 memory in order to fit everything on the board. However, the EPC612D4I from ASRock Rack moves to SO-DIMM modules and elegantly fits one DIMM per channel within a 17cm x 17cm footprint for quad-channel support.

The motherboard uses the narrow ILM socket, usually found in the server space, similar to the X99E-ITX. We also get a single PCIe 3.0 x16 slot, four SATA 6 Gbps ports, an integrated ASPEED 2400 server management module for headless running with KVM support, and dual Intel gigabit Ethernet (I210 + I217). There is also an on-board USB 3.0 Type-A connector for in-chassis USB devices such as dongle licences, and the rear panel is almost at the thin-mini-ITX standard for z-height. Note that there is no onboard audio due to the space limitations.

Sources of DDR4 SO-DIMM modules, especially ECC ones for servers, are relatively few right now. But because this is a server motherboard, chances are that it will mostly be available through B2B channels, although we might see some consumer outlets start selling it later in the year, similar to the C2750D4I. Also, by virtue of server-focused sales, chances are that the EPC612D4I will not come with a bundled narrow ILM cooler (like the X99E-ITX does), and users will have to source their own. The pricing reflects this: the price is reported to be $265, making it a very cheap LGA2011-3 motherboard, especially when we factor in Xeon and ECC support.

Much like the X99E-ITX, ASRock is targeting the small form factor CPU compute markets rather than large GPU compute systems. We have the X99E-ITX in for review so stay tuned for that within the next few weeks.

Source: ASRock Rack (product page)

06 May 16:18

NVIDIA Plans To Wind Down Icera Modem Operations In 2016

by Brandon Chester

Today NVIDIA announced plans to wind down their Icera modem operations in the latter half of their fiscal 2016. Icera was originally an independent British semiconductor company before it was purchased by NVIDIA in 2011 for $367 million. Their operations primarily focused on software and hardware design for wireless modems, with a strong focus on the software side. Their main product offering was their line of software modems (often shortened to softmodems) for use in cellular hotspots, computers, and mobile devices.

All modems function through a combination of hardware and software. However, at the time of NVIDIA's purchase, Icera's solution was significantly more software based than Qualcomm's. Since Qualcomm was really the only big name in modems at the time, NVIDIA's purchase of Icera made sense in order to ship future Tegra chips without having to rely on external basebands.

Unfortunately, NVIDIA's efforts to ship SoCs with integrated modems in the mobile space haven't worked out as well as planned. While there have been some past design wins for discrete Icera basebands such as in the ZTE Mimosa X, NVIDIA wasn't able to drive adoption of their Tegra 4i SoC with its integrated Icera i500 baseband. As a result, Tegra chips since that time have almost entirely relied on external modems from Qualcomm or other manufacturers, with the discrete i500 only being used a handful of times.

Since then NVIDIA has shifted their focus away from mainstream cell phones and tablets and towards more niche products – the company’s press release specifically calls out gaming, automotive and cloud computing – so the company has not needed an in-house baseband solution as urgently as they once did. Meanwhile for NVIDIA’s immediate future, they expect their existing Icera basebands to meet their needs for the next year (or more), and longer term NVIDIA expects to partner with 3rd party baseband suppliers in a fashion similar to what their customers are already doing today.

NVIDIA's press release states that Icera currently employs around 500 people, mostly located in the United Kingdom and France. No longer having a need for Icera themselves, NVIDIA is open to a sale of Icera's technology or the company itself. It's unlikely that they'll be able to sell the company for anything near the original $367 million that they paid, as many different companies have begun to offer their own softmodem products in the years since then.

06 May 00:07

Review: Updated: Chromecast

by James Rivington and Nick Broughall

Introduction

Update: Chromecast was the #1-selling streaming device in 2014, with more than 10 million units sold worldwide, according to market research firm NPD. With the arrival of video streaming services Netflix and Stan in Australia, it's an essential addition to any home theatre setup. The original Chromecast review was written in August 2013.

Streaming media around your home isn't a new idea, but the rise of internet-based on-demand services like Stan, ABC iView and Foxtel's Presto means that it's becoming more and more popular as a way to watch TV.

Chromecast is an inexpensive media streaming adapter that turns any HDMI-equipped television into an app-driven smart TV when paired with a phone, tablet or computer.

The total package is Google's answer to the likes of Apple TV – devices designed to take your digital content and get it up on your big screen without fuss or fiddle.

It's a very simple setup that Google's offering: the Chromecast is a small HDMI dongle that plugs directly into your TV, and connects to your Wi-Fi network.

From there, you choose what to watch on another device and then Chromecast streams that content directly and displays it on your TV. But that means unlike the competition, it has no real interface of its own. It just sits and waits for your other devices to tell it what to stream.

On its launch in the middle of last year in the US, the Chromecast wowed with its cheap price, and it's similarly affordable in Australia: just $49, under half of the Apple TV's RRP. That's awesome, and you can get it a bit cheaper too if you shop around.


On launch, the services that tied in with Chromecast were limited and very much a work in progress. But 18 months on, the list of compatible apps is decent, and continuing to grow.

In the US, the Chromecast app list includes Netflix, YouTube, Hulu Plus, Pandora and HBO Go - most of the big hitters.

In Australia, many of the big-hitting streaming apps are now available: Netflix, Presto, Stan and Quickflix are all featured apps, as are Ezyflix.TV and ABC iView, on both iOS and Android.

Then of course there's Google's YouTube app and Google Play Movies and Music.

There are a few gaps still to fill. The regular catch-up TV services, like TenPlay and Yahoo!7's Plus7, are still behind, but there are enough supported apps to make the Chromecast legitimately useful for Australians.


Things are slightly more useful on the music front, with Pandora, Deezer and Google Play Music apps supporting the service down under. There's no Spotify or Rdio yet though, which could be a dealbreaker for some users.

The key addition here would be Spotify, but there's a wide range of services that could become available. The truth is that the 'casting' method of getting video onto your TV is simply not very compatible with music.

We're sure that this will change (Google can be very persuasive…), but there are major limitations on built-in support at the moment.

Google's Chrome browser on Mac or PC lets you mirror a browser tab to your TV, meaning that you can theoretically send any of these services to your TV that way, but there are issues with this.


If you stream video via the compatible apps, the content is streamed directly from the internet to Chromecast without travelling via your phone or tablet.

But if you mirror a browser tab, you're effectively streaming video to your computer from the internet via your router, then streaming it back to your router and then streaming it out once more to your TV, resulting in heavy network traffic.

If your network is flaky, this will result in choppy performance. We tested on a variety of networks and found results were variable, but a clear network should be able to handle it. Browser streaming options are Extreme (720p high bitrate), High (720p) and Standard (480p).
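The extra load is easy to quantify. Treating each trip across the wireless network as one "hop", a mirrored tab crosses the air three times where a direct cast crosses it once (the 5 Mbps stream bitrate below is an illustrative assumption, not a documented Chromecast figure):

```python
# Wi-Fi traffic generated by a direct app cast versus tab mirroring.
# The bitrate is a hypothetical example, not Chromecast's actual value.
stream_mbps = 5.0      # assumed bitrate of a 720p high-bitrate stream
direct_hops = 1        # internet -> router -> Chromecast
mirrored_hops = 3      # router -> computer, computer -> router, router -> TV

print(f"direct cast: {stream_mbps * direct_hops:.0f} Mbps over Wi-Fi")
print(f"tab mirror:  {stream_mbps * mirrored_hops:.0f} Mbps over Wi-Fi")
```

Tripling the wireless traffic for the same stream is why mirroring stutters on congested networks while direct casting doesn't.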

Using Chromecast, though, it's clear that it's designed with portable devices in mind. You won't find a Chromecast remote in the box, or a main menu tying all of the apps together like on an Apple TV or Roku 3.


All streams originate from a special "Cast" button that's built into each compatible mobile app - your device is the remote.

Currently, Windows Phone 8 users, who don't have many streaming options to begin with, are totally out in the cold for Chromecast support, with only the third-party YouTube app TubeCast being compatible. So Chromecast is only really suited to those with Android and iOS devices.

Despite these limitations from the big names, Chromecast has extra potential to it thanks to developers getting creative with the Google Cast SDK.

Apps like AllCast enable Android users - and, starting in January, Apple iOS users - to display video and photos through the Chromecast, for example. That's useful, since Google somehow neglected to add this feature to Android.

The Plex app will happily stream to Chromecast, which is perhaps the best way to get your library of downloaded videos up on the big screen. Photo Caster is a free app that enables iOS users to put their photos on-screen similarly.

Media Browser is an app for iOS and Android that streams media content stored on any computer in the house, giving the Chromecast home theater PC (HTPC) capabilities. You can even stream podcasts from a few apps.

The way Apple integrated its AirPlay streaming solution deep into iOS means that it inevitably has better support when combining an iOS device with an Apple TV, but as long as developers continue to use the Cast SDK, we will inevitably see almost as widespread support for cross-platform solutions - iOS, Android and Chromecast all working together seamlessly.

Roku has its Roku SDK, but it's easier for many developers to make their apps compatible with the Android-based Chromecast. It requires a little bit of retooling rather than learning an entirely new ecosystem, as is the case with the Roku.

Between its universal nature, rock-bottom price and ease of setup, Chromecast is massively tempting as a little TV add-on.

Design and interface

Chromecast is so small it could easily be mistaken for an oversized USB thumb drive with a little more heft to it.

That contrasts with the Apple TV or Roku, which sit as separate boxes and often have extra outputs, such as for audio. Nice if you want them, but not necessary for most people. Chromecast is just one big HDMI plug, with the exception of its power cable.

It's a simple black unit, likely to blend into the colour of most TVs (though if it ends up sticking out of the side noticeably, it'll be somewhat more intrusive).

If your TV is wall-mounted, Chromecast could be really awkward to implement depending on your TV's HDMI port positions. It might also not fit into the space for HDMI ports in some setups, so Google has included a short HDMI extender cable.


It gets around immediate problems, but be aware that at 72 x 35 x 12 mm, Chromecast might be an awkward fit.

Powering Chromecast can be a little more complicated depending on the age of your television, because it can't run on the power from the HDMI port alone.


On the rear of the unit is a micro-USB port, and Google has included a five-foot micro-USB-to-USB cable in the Chromecast box along with a power adapter that plugs into the mains.

Ideally, you wouldn't use the latter of these - if your TV has a USB port on it, as many Smart TVs do, you can use that as the power source.

Otherwise, you'll have to plug in, and it's possible you'll find that five feet isn't enough. Again, it's not a big problem, but it's worth being aware of.

Once it's plugged in and ready, Chromecast's software setup is not complicated at all - in fact, it's barely existent.


It's a matter of visiting Google's Chromecast "getting started" website on a laptop, tablet or smartphone, downloading and installing some software, and connecting the device to your home WiFi network.

Google has provided apps for each platform that take care of this for you, with the Android version especially good – it does almost everything for you.


On iOS, for example, you need to switch to a Wi-Fi network created by the Chromecast so that you can then tell it how to join your main Wi-Fi network – it's easy enough, but requires a bit of annoying back-and-forth to the settings.

On Android, it's all handled for you invisibly – you don't need to manually change Wi-Fi at all. It's impressive.



Once you've set it up, Chromecast is basically ready to go. It's always on, so when you want to use it, just switch to the HDMI input you've got it on and fire media at it. When there's nothing playing, you generally just get a screensaver showing pretty photography.

Content and performance

Chromecast finally gives Android owners an official media-relay option that broadcasts content from their smartphones and tablets to a TV in a similar way to Apple's AirPlay technology. That it works across Android, iOS, PC and Mac is even better.

There's a "Cast" button that's usually built into the top right of all of the compatible mobile apps - it looks like a rectangle with broadcasting bars (like a Wi-Fi symbol) in the corner.


It's easy to recognise, despite Android being full of similar rectangles these days (and Apple's AirPlay symbol being somewhat similar).

Tap the "Cast" button and you'll be offered a list of Chromecast devices on your network, so just choose which one and you'll see the screen spring into life, loading the video or audio content you selected.

The app then essentially becomes a remote, enabling you to control the screen. Some apps even give you a custom screen on the device, ditching video playback there in favour of big buttons for controlling playback easily.


Video quality is great from most online sources - YouTube in HD looks as good as you'd hope. We did encounter some freezing issues towards the end of a video, but pausing and restarting got us back in action.


Most of the time, when you send video over to the Chromecast, it picks up quickly. Generally, once you've started playing a video, you can exit the app and start doing other things - and this applies both to mobile and desktop.

This means Chromecast can act as a second screen in a couple of cases, particularly when used with desktops, letting you browse the internet or do something else on the computer while a tab with your Gmail, Twitter feed or a video is running in a "casted" background on the big screen.


However, it's not suitable as a second-screen option for work; it's just a mirror of something you've already got open, so if you want to interact with it, it needs to be done on your main computer's screen. There's also a noticeable two-second lag between the computer and the TV.

That's similar to the lag experienced when using Apple AirPlay mirroring on a Mac, but it is at least consistent - it doesn't suddenly hang or get choppy, for the most part.
But here's where AirPlay mirroring from a Mac shows up Chromecast's limitations: the browser extension is just that - browser-based.


Showing off a photo you retouched in Photoshop, a document you want a group to proofread in Word, or any program outside of the Chrome browser tab you're casting requires diving deeper into an options menu for a hidden command that's deemed "experimental" by Google.

It's buggy and slower than Apple TV's full computer mirroring. Chromecast is not meant for mirroring system-wide applications - at least not yet.

App limitations

The Chromecast tab extension is also limited to Chrome at the moment and may never work outside the Google-owned browser. That means Firefox, Internet Explorer, Safari and Opera users are out of luck if they had hoped to "Cast" using their favorite browser.

While Google has opened up the Chromecast API for building applications, it's still forcing everyone to use Chrome for tab casting from a computer.


Chromecast also doesn't require an entire set-top box like the Apple TV to sit in your entertainment console, or force you to wire up lengthy power, HDMI and optional Ethernet cables.

Its plug-and-play nature means that it can be transported much more easily and fit into a backpack to carry to a school presentation or business meeting (we used Apple's iCloud version of Keynote to show a presentation from a Chrome tab), hotel room, or friend's TV. That flexibility may be worth its inexpensive price alone.

There is one potential point of inflexibility, though. The total lack of built-in apps or any direct method of control (such as a dedicated remote) means that if your device runs out of battery or is in another room, you can't play anything.

We had a few little niggles, such as a photo-casting app that forgot the Chromecast after each use and had to be hard-quit and restarted.

Most of these are probably up to developers to fix, though, and as things mature, we expect they'll improve.

Verdict

Google's take on an AirPlay-like universal streaming service is hugely impressive, not just for its price, but because it works well on many platforms.

It's possibly the perfect companion device to an older TV with no smart features, and it's certainly the cheapest way to get a service like Stan or Netflix up and running in your living room if you can't plug in any other compatible devices.

We hope that developers will also be able to work in some other interesting Chromecast options for the future, provided the technology allows it - something equivalent to the big-screen gaming that AirPlay enables for Apple users would be great to have cross-platform, for example.

That said, this option is now being covered in Google's recently released Nexus Player, so don't expect Google Chromecast to get gaming features any time soon.

We liked

Chromecast is brilliantly easy to set up, gave us little hassle after the first few minutes, and then worked really well.

You quickly become accustomed to the idea of just reaching for your phone and beaming something over to it - just like the future should be.

The streaming quality is generally very good, and it's quite reliable, especially for a product that's still fairly early in its life. It's also impressive that it works so well on all platforms.

At $49, we definitely like the price. It's well into impulse-buy territory, and we do think it's worth picking one up just for its potential, even if you already have a way to stream the services it currently works with.

We disliked

The lack of apps was the simple, major problem when Chromecast originally launched. It was fairly well-supported in the US, but elsewhere it needed fleshing out... and it has been.

The only thing you might dislike about Chromecast is its reliance on the phone in your pocket or the tablet on your coffee table. But for anyone who keeps those things with them, it's not a problem and in fact for many it's a more convenient way of arranging things.

There are a few other things we'd like to see, such as a bit more polish to improve things further – faster loading of videos in some apps, broader support in basic Android apps, stronger streaming from desktops, for example.

Verdict

Chromecast is an inexpensive, easy-to-use way of accessing streaming on your TV. Many of the major video streaming apps are now compatible and you soon find yourself using it on a daily basis.

If it cost more, we'd say it might be worth looking at the Apple TV or the Nexus Player, but at this price how can you possibly argue? Chromecast is fun enough, robust enough and has enough potential that we thoroughly recommend it to anyone without a smart TV.








05 May 23:33

How To Use Licensed Music in Your YouTube Videos

by Paul Lilly

Follow these steps to avoid having licensed music removed from your YouTube videos

If you've ever tried to legally inject a copyrighted song into your YouTube video (meaning you licensed it or otherwise own the song in question), you know it can be an exercise in frustration. It's not that using music is difficult—the problem stems from YouTube's sometimes overzealous efforts to keep everything on the up and up. Even though you have permission to use an audio track, or own the copyrighted song, YouTube may take down your video, essentially claiming you're a copyright crook. Have it happen enough times and you might even find yourself banned.

Part of the issue has to do with Content ID, which is a system YouTube uses to look for copyrighted music. It automatically scans every video that gets uploaded and sniffs out potential copyright violations, but one of the problems with Content ID is that it only identifies the song, not the license. Alternately, someone can file a copyright claim on music contained in your video and, well, it can be a hassle.

Well folks, the good news is there's a relatively easy way to avoid the headache of having your video removed from YouTube over false copyright concerns. Here's how.

Obtain Permission

The first thing you need to do is make sure you have permission to use the copyrighted song you want in your video. Depending on the song, this can be relatively easy or a bit of a pain in the backside, and potentially time consuming. Be that as it may, it's necessary if you're not trying to skirt the law.

You can request permission from major music labels by using the following links:

For other labels and music rights holders, you'll need to do some digging on Google, Bing, DuckDuckGo, or whatever happens to be your search engine of choice.

Or Use YouTube's Catalog of Music

If you don't want to go out and license music on your own, you can swap out audio tracks on your video using YouTube's audio swapping tool. Here's the thing—YouTube possesses over 150,000 tracks that are free to use, and since they've all been pre-approved, you don't have to worry about a copyright claim yanking your video offline.

YouTube Audio
YouTube provides plenty of free music to add to your videos.

You can find step-by-step instructions by clicking here, though if your click finger is sore, here's what you do:

  1. Go to your Video Manager page, and click the arrow next to Edit on the video you’d like to edit
  2. In the drop-down menu, click the Audio button
  3. Search for a track using the search box, or browse by genre using the Featured Tracks tab
  4. Select a track by clicking on it and preview your video with the music track added
  5. Once you've found a track you like, click the Save button, and the new audio will be applied to your video. Audio tracks can be positioned in a precise part of your video using the Position Audio feature.

That's it, you're finished, there's nothing more to see here. For the rest of you, move on to the next step.

Be Audacious and Grab Audacity

If you routinely work with music files, then you've probably heard of Audacity. And if not, well, here's your chance to get acquainted with the program. It's free, open-source, and works on multiple platforms, including Windows, Mac, and GNU/Linux. You can grab Audacity here.

What we're going to do with Audacity is ever-so-slightly alter the speed and pitch of the copyrighted song that you licensed. By doing this, it throws YouTube off the scent and greatly reduces the risk that your video will be taken down.
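To see why such a tiny tweak can throw off automated matching, here's a toy sketch. This is emphatically not YouTube's actual Content ID algorithm (which is far more sophisticated and robust); it's just a naive per-window "fingerprint" that illustrates the general idea that a 0.3 percent resample shifts every window boundary onto different audio:

```python
def fingerprint(samples, window=500):
    """Naive 'fingerprint': the rounded per-window energy profile."""
    return tuple(
        round(sum(samples[i:i + window]), 3)
        for i in range(0, len(samples) - window + 1, window)
    )

def resample(samples, factor):
    """Linear-interpolation resample, approximating a speed change."""
    out, pos = [], 0.0
    while pos < len(samples) - 1:
        lo = int(pos)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[lo + 1] * frac)
        pos += factor
    return out

# A repetitive test signal fingerprints identically to itself...
signal = [i % 50 for i in range(5000)]
assert fingerprint(signal) == fingerprint(list(signal))

# ...but after a 0.3% speed-up, every window covers slightly
# different audio, so a naive exact-match comparison fails.
assert fingerprint(signal) != fingerprint(resample(signal, 1.003))
```

A real matcher compares fingerprints fuzzily rather than exactly, which is why the article's approach reduces (rather than eliminates) the chance of a claim.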

Once you've installed Audacity, fire it up and load your song into the program. You can either drag-and-drop the music file into Audacity, or load it the old-school way: File > Open and select your song.

With your song loaded into Audacity, go ahead and highlight the entire track. The quickest way is Select All (Ctrl+A, found under the Edit menu), though you can also simply click in the gray area on the left-hand side of the track to select the whole thing.

Audacity Change Speed
Don't go overboard or your altered track will sound noticeably different from the original.

After you've selected the length of the song, it's time to alter the speed. Click on Effect > Change Speed. A pop-up box will appear giving you the ability to speed up or slow down the track, either by dragging the slider or entering a percentage. You don't want to go crazy here, or the song will sound funky, but too slight a change may not be enough to duck YouTube's filter.

We recommend changing the speed by 0.3 percent—it doesn't matter if you speed it up or slow it down. It will take anywhere from a few seconds to a few minutes to process, depending on the length of the song and your PC. When it's finished, give the altered track a listen. With such a small change in speed, only the most discerning ears should be able to notice a difference.
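If you'd rather script the change (or sanity-check what's going on), the same audible trick can be sketched with Python's standard wave module: leave the samples untouched and raise the file's declared frame rate by 0.3 percent, so playback speeds up and the pitch rises together, much like Change Speed. (Audacity actually resamples the audio data rather than changing the header rate, but the audible result is equivalent.) The file names below are just placeholders:

```python
import wave

def change_speed(src_path, dst_path, factor=1.003):
    """Copy a WAV file, scaling its declared frame rate by `factor`.

    The samples themselves are untouched; playing them back at a
    higher rate shifts speed and pitch together, like Audacity's
    Change Speed effect.
    """
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        frames = src.readframes(params.nframes)
    with wave.open(dst_path, "wb") as dst:
        dst.setnchannels(params.nchannels)
        dst.setsampwidth(params.sampwidth)
        # A 0.3% speed-up: e.g. 44100 Hz becomes 44232 Hz.
        dst.setframerate(round(params.framerate * factor))
        dst.writeframes(frames)

# Example usage (hypothetical file names):
# change_speed("song.wav", "song_tweaked.wav", 1.003)
```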

Locate LAME
You'll need the LAME encoder to save MP3 files from Audacity.

If you're happy with the result, go to File > Export Audio and save the track to WAV or MP3. If you choose MP3, you may get a message letting you know that Audacity needs the file lame_enc.dll. Don't sweat it—just hit the Download button (or click here) and follow the instructions.

Pitch Imperfect

If changing the speed doesn't do the trick and/or you want to err on the side of caution, you can also alter the pitch. The steps are mostly the same—load up your music file and highlight the entire track, but this time select "Change Pitch" from the Effect menu.

Audacity Change Pitch
Alter the pitch too much and it will sound like Alvin and the Chipmunks hijacked your track.

Under the Frequency heading, there's a Percent Change box. We suggest changing the pitch anywhere from 3 to 5 percent in either direction (for slower songs or tracks with an emphasis on instrumentals, like bluegrass, you'll probably have better results with negative percent changes). Try it one way and give it a listen. If you don't like the result, go to Edit and select Undo Change Pitch. Try again in the other direction and/or alter the percentage.
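For a feel of what those percentages mean musically: pitch is logarithmic, so a frequency change by a factor r corresponds to 12·log2(r) semitones. Here's a quick sketch, assuming the Percent Change field is a straight frequency-percentage change (which matches the semitones read-out Audacity shows alongside it):

```python
import math

def percent_to_semitones(percent):
    """Convert a frequency change in percent to semitones (12 per octave)."""
    return 12 * math.log2(1 + percent / 100)

# The suggested 3-5 percent range stays well under one semitone
# in either direction, which is why it's barely noticeable.
for p in (3, 5, -3, -5):
    print(f"{p:+d}% -> {percent_to_semitones(p):+.2f} semitones")
```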

Upload and Enjoy!

All that's left now is to upload your slightly altered, licensed music track to your YouTube video. It's still possible that a person could manually report your video, so you may want to leave a note letting viewers know that you licensed the copyrighted song(s) in your video.

NOTE: Maximum PC does not condone the stealing of copyrighted material, so we do suggest that you ask for permission whenever possible. 

Follow Paul on Google+, Twitter, and Facebook

05 May 20:54

Microsoft improves hybrid management across clouds

by Jamie Hinks
Microsoft improves hybrid management across clouds

Amidst all the noise about Windows 10, Microsoft has announced an Operations Management Suite (OMS) that allows enterprises to keep a handle on servers across cloud and bricks-and-mortar data centres.

First reported by VentureBeat, the product allows companies to simplify the management of assets held in a hybrid setup and take control of any physical, virtual or container instance across any cloud, whether inside an internal data centre or on Azure, AWS, Windows Server, VMware or OpenStack.

"The suite enables the management of any instance in any cloud," Mike Schutz, general manager of product marketing at Microsoft's Server and Tools division, told VentureBeat in an interview.

OMS's main jobs are to track and manage log analytics, security, availability, and automation to bring to market an offering that costs a lot less than comparable products, according to Microsoft.

Much more coming soon

"I think from a broad-based standpoint, we feel that in customers' journey to the cloud, hybrid is really what makes our approach unique, and so all the investments we're making are to make our customers data centers and the servers that they run be the edge of our cloud. It blurs the line between private cloud and public cloud," Schutz added.

OMS is out now and Microsoft added that it will bring even more capabilities at a "rapid pace" and in the next 12 months cloud-based patching, inventory, alerting, container management and a lot else besides will become part of the offering.








05 May 20:46

Mushkin Expands Reactor Series SSD Capacity Options

by Paul Lilly

New capacities and a new contest

Mushkin seems awfully excited about the expansion of its Reactor Series solid state drive line, which in addition to its existing 1TB capacity option, will soon be made available in 256GB and 512GB capacities. To celebrate the new SKUs and perhaps earn some street cred in the crowded SSD market, Mushkin is also holding a contest where the main prize is a Mushkin Extreme Gaming PC.

First let's talk about the drives. The Reactor SSD family uses Silicon Motion's SM2246EN controller and is equipped with a SATA 6Gbps interface. Rated sequential read and write speeds are up to 560MB/s and 460MB/s, respectively, along with up to 74,000 IOPS for random 4K reads and up to 76,000 IOPS for random 4K writes. Note that those are the performance figures for the 1TB capacity -- Mushkin hasn't said whether the 256GB and 512GB drives will boast the same metrics.

As far as the 1TB drive goes, it's reasonably priced on Newegg at $340, or 34 cents per gigabyte. Pricing for the lower capacity drives hasn't yet been announced.

Now, about that contest.

"To celebrate the expansion of our very popular Reactor SSD starting May 1, 2015, gaming fans can enter our Ghost Recon Phantoms Sweepstakes for a chance to win up a custom built Mushkin Extreme Gaming PC and much more," said Nicolas Villalobos, Director of Global Marketing at Mushkin Inc. "As part of the celebration fans who purchase select Mushkin products through Newegg.com will also receive a Ghost Recon Phantoms $30 Value game code – free"

There are several prizes up for grabs, including various gaming gear from Razer and DXRACER. You can read the official rules here and enter for a chance to win here.

Follow Paul on Google+, Twitter, and Facebook

04 May 16:19

SolidRun CuBox-i is a 2-inch Cube with Quad-Core ARM SoC, 4GB of RAM

by Pulkit Chandna

Will come pre-installed with Android and Kodi home-theater software

Around two years back, Israeli company SolidRun launched CuBox-i, a line of ARM-powered 2-inch cube PCs with a starting price of $45. It’s now upping the ante on that front with a device it says is the “smallest ARM quad core 4GB mini computer” out there.

The new CuBox-i 4X4 marries a quad-core i.MX6 chip from Freescale with 4GB of DDR3 RAM. While the particular quad-core SoC (system-on-chip) isn't new to the range, the most RAM seen in the current crop of CuBox-i devices is 2GB. Other specs include two USB 2.0 ports, Ethernet, a micro-USB port, eSATA, 802.11b/g/n WiFi, Bluetooth 4.0, optical audio out, and IR. It will come with Android 4.4 KitKat and Kodi (formerly XBMC) pre-installed, though you'll be able to install your favorite flavor of Linux should you want to.

"The CuBox-i 4X4 module addresses the growing need for increased high capability power efficient fan-less mini-computers," Dr. Atai Ziv, CEO of SolidRun, was quoted as saying in a press release. "Its unique nonmoving parts and passive cooling design increase reliability in various home, city and industrial floor environments targeting multimedia & digital signage, industrial and IoT applications."

Quite clearly the CuBox-i 4x4 packs quite a serious punch for such a tiny device (although nowhere near as much as this bad boy), but how much will it cost? The company’s website has the device listed for pre-order for $169.99, with shipping expected to begin in late May.

Follow Pulkit on Google+

23 Apr 17:01

In Depth: Taking Mini's augmented reality glasses for a test drive

by Lily Prasuethsut
In Depth: Taking Mini's augmented reality glasses for a test drive

In a surprising move, Mini has teamed up with Qualcomm to create augmented reality glasses, showing that even though Google has gone back to the drawing board with Google Glass, augmented reality isn't out for the count.

Dubbed Mini Augmented Vision, the specs are a working prototype with no pricing or release date attached for consumers.

They were created to blend fashion with the functionality of driving using augmented reality assistance. I tried on a pair during an event in San Francisco, and took a spin with them on a driving simulation to see how they really work.

Riding in the car with AR

I was told during my demo that since the glasses were still a work in progress, I had to "wear them like a hat," meaning the arms of the glasses had to go over my head instead of directly on my face.


Certain calibrations were also needed before the real demo began so the glasses would work properly with my eyes. An insert for prescription lenses was also available for people who need them, which was what I ended up with. After the initial awkward method of putting them on, I found the futuristic glasses to be fairly comfortable and lightweight despite their bulky frame.

Though cartoonish and Elvis-like, Mini Augmented Vision isn't any sillier looking than other AR glasses out there. They could even pass for over-sized aviator sunglasses. Robert Richter, BMW senior technology engineer for the project, told me that if a second generation is developed, the glasses will take on a slimmer design.


With the outside designed largely by the folks at BMW [which owns Mini], the innards were taken care of by Qualcomm. The Mini AV glasses are packed with 2GB of RAM and a Qualcomm 805 Snapdragon processor running off Android 4.4 and Vuforia, Qualcomm's mobile augmented reality software platform that was released last year.


Using Wi-Fi, Bluetooth and GPS sensors and a 5MP camera housed in the front of the glasses, a head-up display overlays information stereoscopically on a 720p (1280 x 720), 16:9 screen, with the field of view reaching only 28 degrees.

The glasses hook up to the Mini's connected car system and your phone to receive information via the sensors, which is then fed directly to your eyeballs while driving. You can have directions, vehicle speed and even real-time street sign information displayed in front of you.


Directional arrows also pop up in front of you during what Mini and Qualcomm call "the first mile" and "the last mile," ensuring you reach your destination without issue. Essentially, a navigational display pops up from your current location to the vehicle or from the vehicle to the final destination.

Aside from the HUD, Mini was keen to show off the glasses' "X-Ray Vision" feature. By syncing up to the car's external cameras, you're able to see through the car from the glasses in case you need to check on an obstruction or maneuver into a parking spot.


"Safety is our primary goal," Richter said when I asked how distracting AR would be coupled with driving. The companies created minimalistic icons to make sure that information is presented at the opportune moment. During my demo, an engineer also noted how the HUD shows snippets of info centered in the display so "you can still have your eyes on the road in front of you while quickly scanning the info."

The future is unclear

According to Richter, the continued development of these AR glasses hangs on public reception. If there is general interest, it's likely Mini and Qualcomm will pursue a second version. He noted the companies have already learned much about designing and that a sleeker device could definitely be made.


"BMW is interested in keeping the Mini connected car platform open to third parties," Richter said. So if we don't see a second-gen of glasses, it will remain a stepping stone for AR integration with Mini Coopers, showing that AR will continue to be an area of focus for the car company.

Jay Wright, Qualcomm vice president of Vuforia, reiterated this notion: "The heads up display for your life, starts as a heads up display for your car."

If Mini and Qualcomm can make Augmented Vision something people will actually want to wear while maintaining decent battery life and a good display, then Wright is on to something.

Mini AV doesn't have the capacity of the Microsoft HoloLens but it seems more practical than Google Glass. My time with the Mini glasses was short, but so far, the progress the companies have made with augmented reality is impressive and feels like a step in the right direction.








22 Apr 17:22

Review: Dyconn’s i6 Power Case packs a 3,200 mAh battery and features wireless charging (Video)

by Dom Esposito


There is absolutely no shortage of battery case offerings for the iPhone 6. Even though we've covered the best iPhone battery cases here, it's sometimes overwhelming sorting through them all. The real winners are the ones with maximum battery capacity and some extra features to spice things up. Recently, I came across Dyconn's i6 Power Case and was pleasantly surprised to find that it fits most of what I was looking for, but is it enough? Well, not only does this case pack a 3,200 mAh battery inside, it also features wireless charging. It's not a must-have feature, but I sure do appreciate the convenience…

When I first received the case, the packaging wasn’t too flashy (a bit basic in my opinion), but I was more interested in the features/functionality. With 3,200 mAh, you can almost charge up an iPhone 6 twice. That was definitely appealing. On top of that, being able to recharge the whole package wirelessly means that I don’t need to shuffle around for my cable.

Inside of the box, you’ll find a Micro USB charging cable, headphone jack adapter, the case’s dock (for charging), and the i6 Power Case. It’s a pretty simple setup overall. There are pins on the bottom of the case that correspond to small plates on the dock. When these points connect, the case/iPhone begin to charge. It’s not Qi charging, but for the ease of use the conductive wireless charging will do just fine. Watch the video review for a full demo of this feature.


The dock features its own fixed USB cable that can be plugged into your iPhone's wall adapter for power. If you're not in the mood for wireless charging, the Micro USB cable can be used with the port on the bottom for charging/syncing, but the iPhone will not sync through the wireless charging dock if plugged into a computer for power. Also, it's important to note that the 3.5mm jack adapter may be necessary for certain types of headphones. EarPods fit within the recessed headphone jack just fine, but for anything wider you'll need to keep track of that adapter.


LED power indicator lights.

The i6 Power Case is made from polycarbonate and TPU with full protection for the iPhone. The side walls of the case are flexible (TPU) and allow you to slide in the iPhone without having to worry about disassembling anything. It provides a very secure fit, and the rubber coating on the outside of the case feels good in the hand. If there's one thing I'm not a fan of here, it's the rubbery matte finish. It feels great, but picks up prints and oils from your fingers very easily, though that's no different from most other battery cases. The i6 Power Case is surprisingly thin as well, only adding 1/4″ to the overall bulk of the device and about a half inch in height.


The full setup.

The lock/volume buttons are covered, but the case provides openings for the camera/flash and mute switch. On the backside of the case, there's a small power button that will illuminate the LED power indicators above it when pressed. If you hold down the button for two seconds it will begin charging the iPhone; an additional two-second press will cut the power. As far as recharging goes, the instructions note that it will take roughly 3.5 hours to fully charge the iPhone and its case, and we found that to be accurate.


Front openings for the speaker/microphone.

Overall, you can’t go wrong with this battery case. It’s simple, provides a good amount of extra battery life, and features wireless charging. You also have the ability to purchase additional docks ($24.99) if you’d like to have more than one around the house or a spare for the office. If you’re interested in the whole package, you can pick up the MFI-certified Dyconn i6 Power Case (dock included) for $89.99. When compared to other battery cases on the market, it seems like a pretty good deal.





22 Apr 17:17

AVADirect Announces VR Desktop PCs for Sixense Stem System

by Paul Lilly

A real desktop for virtual gaming

You can feel that we're on the verge of a virtual gaming revolution, or so several companies heavily invested in VR technology hope. We're right there with them -- not from an investment standpoint, but from the perspective of a fan, as we've played around with some awesome VR demos. Looking to give the category a nudge, Ohio-based boutique builder AVADirect today announced a pair of desktops designed in collaboration with Sixense for the latter's Stem System.

Our own Jimmy Thang spent some hands-on time with the Sixense Stem VR controller several months ago, a wireless device that tracks movement with barely any latency. The system as a whole offers whole-body presence in virtual worlds, and in one of the demos Thang got to experience, he was able to wield a lightsaber, noting that "it felt incredibly immersive." You can read more of his experience here.

Getting back to AVADirect's custom desktops, both offerings are decently spec'd machines that promise 1080p gaming at a minimum of 75fps. The first is the AVA Sixense VR Desktop priced at an even $1,800 (kudos to AVADirect for omitting the 99 nonsense). For that you get an Intel Core i5 4460 quad-core CPU nestled into an EVGA Z97 Stinger WiFi motherboard, 8GB of Kingston HyperX Fury DDR3-1866 RAM, GeForce GTX 980 SuperClocked ACX 2.0 graphics card, 1TB Seagate Barracuda HDD, slim-slot 8X DVD burner, Windows 7 Home Premium 64-bit, and a 3-year limited parts and labor warranty with lifetime tech support.

Several of those components are upgradeable, like adding a 500GB Samsung 850 Evo SSD for $260 or doubling the RAM for $60. Alternatively, you can pony up $2,300 for the AVA Sixense VR Ultimate Desktop, a non-configurable system consisting of an Intel Core i7 4790 quad-core CPU, 16GB of Kingston HyperX Fury RAM, and the aforementioned SSD paired with the same hard drive. Other components are the same, as well.

Note that the Sixense Stem system is not included with either desktop, nor is it available yet. According to the project's Kickstarter page, there was an unexpected setback in production due to failed testing for FCC/CE regulatory standards. There's already been a redesign, and if the new model passes testing this month, units will begin shipping in July.

You can pre-order a Stem System from Sixense's website, and of course the AVADirect systems are available now.

Follow Paul on Google+, Twitter, and Facebook

09 Apr 16:22

Zotac Welcomes Broadwell to the Zbox Nano Family of Mini PCs

by Paul Lilly

Broadwell kicks off Zotac's M-Series

As one of the more active players in the mini PC market, Zotac today announced the introduction of its new M-Series Zbox mini PC models with processors based on Intel's 14nm Broadwell microarchitecture inside. There are two models to begin with, the Zbox MI522 Nano and Zbox MI542 Nano, both of which measure a super scant 5 inches by 5 inches by 2 inches (127mm by 127mm by 45mm).

Several of today's mini PCs are surprisingly powerful, considering their tiny footprints. And on paper, Zotac's new MI522 and MI542 Nano boxes seem like strong contenders for secondary PCs, or primary systems for people who don't need a ton of computing power.

The only difference between the two systems is the processor -- the MI522 sports an Intel Core i3 5010U, a dual-core part clocked at 2.1GHz with Hyper-Threading support and 3MB of cache, while the MI542 features a Core i5 5200U, also a dual-core chip, but clocked at 2.2GHz with a 2.7GHz turbo. Both lean on Intel HD Graphics 5500 (300MHz to 900MHz) for graphics chores.

Other specs include two DDR3-1600 slots with support for up to 16GB of RAM, a 2.5-inch SATA HDD/SSD bay, a 3-in-1 memory card reader, 802.11ac Wi-Fi and Bluetooth 4.0 connectivity, four USB 3.0 ports, a single USB 2.0 port, a GbE LAN port, and HDMI 1.4a and DisplayPort 1.2 outputs. Plus versions of each model also come with 4GB of DDR3 memory and a 500GB hard drive for internal storage.

One thing to keep in mind with these systems is that they don't come with an operating system, nor do they have an optical drive bay. That means you'll need either an external optical drive or a USB drive to install an OS. Also, in my experience with Zotac's small box systems, an SSD is a must.

No word yet on price or availability.


06 Apr 17:09

NVIDIA Announces Quadro M6000 & Quadro VCA (2015)

by Ryan Smith

Earlier this week we took a look at the GeForce GTX Titan X, NVIDIA’s first product to use their new high-end Maxwell GPU, the GM200. Now just two days later the company is back with GM200, launching it in its new professional graphics counterpart, the Quadro M6000.

Like Titan on the GeForce side, the 6000 series is NVIDIA’s flagship Quadro line, and today’s launch sees the new GM200-based Quadro M6000 take its place at the top of the Quadro graphics stack. What makes this launch interesting is that NVIDIA has never launched a flagship Quadro card so close to a flagship GeForce card in this manner; Quadro cards usually launch months down the line, not days. The end result is that professional users are getting much earlier access to NVIDIA’s best hardware.

NVIDIA Quadro Specification Comparison

                       M6000         K6000       K5200       6000
CUDA Cores             3072          2880        2304        448
Texture Units          192           240         192         56
ROPs                   96            48          32          48
Core Clock             N/A           900MHz      650MHz      574MHz
Boost Clock            ~1140MHz      N/A         N/A         N/A
Memory Clock           6.6GHz GDDR5  6GHz GDDR5  6GHz GDDR5  3GHz GDDR5
Memory Bus Width       384-bit       384-bit     256-bit     384-bit
VRAM                   12GB          12GB        8GB         6GB
FP64                   1/32 FP32     1/3 FP32    1/3 FP32    1/2 FP32
TDP                    250W          225W        150W        204W
GPU                    GM200         GK110       GK110       GF110
Architecture           Maxwell 2     Kepler      Kepler      Fermi
Transistor Count       8B            7.1B        7.1B        3B
Manufacturing Process  TSMC 28nm     TSMC 28nm   TSMC 28nm   TSMC 40nm

So just what is the Quadro M6000? Packing a fully enabled GPU, this is GM200 at its best. All 3072 CUDA cores are enabled, and with a maximum clockspeed of 1.14GHz the card is capable of pushing 7 TFLOPS of single precision performance. Coupled with this are GM200’s double-sized ROP partitions, giving the M6000 96 ROPs and more than 2x the pixel throughput of the outgoing K6000.

Meanwhile it’s interesting to note that NVIDIA’s GPU Boost technology has finally come to the Quadro lineup via the M6000. The  M6000 supports 10 different boost states, the fastest of which is the 1.14GHz state that gives the card its 7 TFLOPS of performance. As with GeForce and Tesla cards, GPU Boost allows NVIDIA to raise their shipping clockspeeds for better performance without violating the card’s cooling or power delivery restrictions.

Paired with the GM200 is 12GB of GDDR5 memory, which is as much as the K6000 and still the most one can pack on a memory bus of this size. M6000 clocks its memory at 6.6GHz, which is good for 317GB/sec of memory bandwidth. Furthermore, as with past high-end Quadro cards ECC protection is available for the memory (and only the memory, no cache), which trades off some memory bandwidth for better protection against memory errors.
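
The headline figures can be sanity-checked with back-of-the-envelope math. This is our arithmetic, not an NVIDIA formula; it assumes the usual 2 FLOPs per CUDA core per clock (one fused multiply-add) and the GDDR5 convention of quoting the effective data rate:

```python
# Sanity-check the quoted M6000 figures from the spec table.
cuda_cores = 3072
boost_clock_ghz = 1.14          # top GPU Boost state
bus_width_bits = 384
mem_clock_ghz = 6.6             # effective GDDR5 data rate

# 2 FLOPs per core per clock (one fused multiply-add)
fp32_tflops = cuda_cores * 2 * boost_clock_ghz / 1000
# Bandwidth = effective data rate x bus width in bytes
bandwidth_gbs = mem_clock_ghz * bus_width_bits / 8

print(f"{fp32_tflops:.1f} TFLOPS")   # 7.0, matching NVIDIA's 7 TFLOPS claim
print(f"{bandwidth_gbs:.1f} GB/s")   # 316.8, rounded to 317GB/sec above
```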

On the overall performance front, the Quadro M6000 is expected to offer a significant performance boost over the K6000, similar to what we’ve seen on the consumer side with the GTX Titan X. Along with the greater clockspeed and the slight increase in the number of CUDA cores, the M6000 brings with it the Maxwell 2 family architecture and its efficiency improvements. Actual performance will depend on the application, but 50% or more is possible, especially in exotic scenarios that stress the ROPs. To that end NVIDIA gave Lucasfilm some of the first M6000 cards, and they reported a better than expected performance increase:

"To create the most immersive and visually exciting imagery imaginable, Lucasfilm artists and developers need optimal graphics performance and GPU power," said Lutz Latta, Principal Engineer at Lucasfilm. "With the NVIDIA Quadro M6000 GPU, we saw overall gains of 55% in a heavy compute and memory access ray-tracing application using layered shadow maps. This kind of performance boost gives our artists a necessary edge to realize their creative vision."

Along with Maxwell 2’s architectural efficiency improvements, Maxwell 2 also brings with it a series of feature improvements that make their debut in the Quadro family on the M6000. On the display side, M6000 is the first Quadro capable of driving four 4K displays (previous gen Quadros were limited to two such displays) thanks to the updated display controller. Meanwhile Quadro also gains the latest NVENC video encoder, which though unlikely to be used at this early stage, opens the door up to real-time HEVC encoding on Quadro.

As for the card’s construction and power requirements, both have changed compared to the K6000. The M6000’s TDP is 250W, up from 225W on the K6000. The increased TDP allows for higher clockspeeds than the Quadro family’s historically conservative norms, and is at this point equivalent to the consumer GTX Titan X’s power requirements. Interestingly, despite this increase the M6000 requires only a single 8-pin PCIe power connector (located on the far side of the card, as in past Quadro designs); this technically puts the M6000 out of spec on PCIe, since 250W is more than what the slot plus an 8-pin connector can provide (225W). We asked NVIDIA about this, and they told us that the card pulls the extra power from the 8-pin connector; though not officially in spec, the kind of systems expected to house the M6000 should have no problem delivering the extra amperage necessary.
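
The spec shortfall is easy to quantify. The 75W slot and 150W 8-pin limits below come from the PCI Express specification; the TDP is from the card's specs:

```python
# PCIe power budget check for the 250W M6000 with one 8-pin connector.
SLOT_LIMIT_W = 75        # PCIe x16 slot, per the CEM spec
EIGHT_PIN_LIMIT_W = 150  # 8-pin PCIe power connector, per spec
card_tdp_w = 250

in_spec_budget = SLOT_LIMIT_W + EIGHT_PIN_LIMIT_W  # 225W total in-spec power
overage = card_tdp_w - in_spec_budget              # 25W drawn beyond spec
print(in_spec_budget, overage)  # 225 25
```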

Meanwhile the card’s construction has seen the K6000’s plastic shroud and cooling apparatus replaced with the metal GTX Titan shroud and cooler, similar to the GTX Titan X. This change is largely driven by the power increase, as the GTX Titan cooler is already qualified to handle 250W designs. To set it apart from the GTX Titan X, the M6000 gets a black & green paint job rather than the Titan’s all-black paintjob. Otherwise the change in coolers has no effect on the card’s dimensions, with the card still being a double-slot 10.5” long card, just like the K6000.

Moving on, while the M6000 will be a graphics monster, because it uses the GM200 GPU it also inherits GM200’s compute characteristics, including the GPU’s highly limited double precision (FP64) performance. On past Quadro 6000 series cards, NVIDIA used GPUs with high FP64 throughput (largely an artifact of also using those GPUs in Tesla compute cards) and left FP64 throughput unrestricted. This made the Quadro K6000 a jack of all trades, offering NVIDIA’s best pro graphics performance along with its full compute performance.

However GM200 and the Quadro M6000 change that. With Quadro M6000 having a native FP64 rate of 1/32 FP32, M6000 will only have minimal FP64 capabilities. In our GTX Titan X article we discuss the development rationale for this, but NVIDIA has essentially opted to build the best graphics and FP32 compute GPU they can, and not waste space on FP64 resources. Consequently this is the first Quadro 6000 series card in some time to have such poor FP64 performance. However as FP64 compute is not widely used in graphics, this is not something NVIDIA believes will be an issue. In the far more common scenario of FP32 compute (e.g. most ray-tracing engines), M6000 will be far more performant than its predecessors.

Finally, as far as use cases go, NVIDIA is aiming the M6000 at a cross-section of possible markets. There is of course the traditional pro visualization market, the high-end of which is always in need of greater GPU performance, something the M6000 can provide in spades. However the company is also pushing the use of Physically Based Rendering (PBR), a compute-intensive rendering solution that uses far more accurate rendering algorithms to accurately model the physical characteristic of a material, in essence properly capturing how light will interact with that material and reflect off of it rather than using a rough approximation. We’ll have more on PBR a bit later this week when we talk about Quadro developments at GDC.

Wrapping things up, NVIDIA tells us that the Quadro M6000 will be available soon in complete systems through the company’s regular OEM partners, and as individual cards via the typical retail channels. As is customary for NVIDIA, a launch price has not been announced, but we would expect the M6000 to launch at $5,000+, as has been the case with past Quadro 6000 series cards.

Quadro VCA (2015)

Meanwhile with the launch of the Quadro M6000, NVIDIA is also using this opportunity to refresh their Iray Visual Computing Appliance (VCA), the company’s high-end network-attached render server. The VCA specializes in very high performance remote rendering jobs, packing in multiple GPUs into a single server box, with further scale-out capabilities to multiple VCA boxes via 10GigE and Infiniband.

Now dubbed simply the Quadro VCA, this updated appliance packs in eight of NVIDIA’s high-end Quadro cards. The cards themselves are GM200-based but are technically not M6000s -- NVIDIA is quick to note that they have a different BIOS and are clocked slightly differently -- but they should perform similarly to the aforementioned M6000. Each card carries 12GB of VRAM and a fully enabled GPU, giving the entire VCA some 96GB of VRAM and 24,576 CUDA cores.

Driving the Quadro cards is a pair of 10-core Xeon processors (we don’t have the specific model at this time, but believe it to be from the Xeon E5 v3 family), 256GB of system memory, and 2TB of solid state storage. Other than the change in processors and the updated Quadro cards, the rest of the specs are identical to the previous generation VCA.

On the software side, the new Quadro VCA runs CentOS 6.6. It will also come with Iray 2015 and Chaos Group’s V-Ray RT pre-installed to make setup easier; note, however, that the VCA does not include licenses for those software packages, which must be purchased separately.

The Quadro VCA will be available soon through NVIDIA's VCA partners for $50,000.

06 Apr 16:19

Async Shaders Will Allow GPUs to Live Up to Their Full Potential, Says AMD

by Jimmy Thang

Improvements to be enabled via DX12 and Vulkan APIs

Graphics cards are currently “not living up to their full potential,” says AMD, and the company is adamant that the forthcoming DX12 and Vulkan APIs will change that. Specifically, the red team says that these APIs will be able to take advantage of AMD’s asynchronous compute engines (ACE), which are inherent to AMD’s GCN architecture. These asynchronous compute engines will allow future games that support them to accomplish more simultaneous tasks at a time. AMD suggests that this is tantamount to hyperthreading for GPUs.


This is the traditional GPU pipeline with DX11. 

The logic here is that having multiple GPU queues allows tasks to be completed much faster, so users could see better performance out of their GCN graphics cards. On a call with AMD, the company explained that the traditional GPU pipeline is very linear under DirectX 11: all work must be done in a single queue that is scheduled in a pre-determined order. With DX12, however, tasks like physics, lighting, and post-processing can be divided into different queues and scheduled independently. This not only amounts to higher FPS in applications that support asynchronous shaders, but lower latency as well, which is key to good VR experiences. To illustrate the difference, AMD likened DX11’s current inefficiency to inner-city roads with traffic lights, and DX12’s more asynchronous model to a freeway system. In the freeway model, tasks can merge in to fill gaps and aren’t bogged down by red lights -- that is, bottlenecks.
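
The freeway analogy can be made concrete with a toy scheduling model. The sketch below is illustrative Python, not the D3D12 API, and the task costs are made-up numbers chosen only to show the effect of overlapping a compute queue with a graphics queue:

```python
# Toy model: serial DX11-style queue vs. two overlapping DX12-style queues.
# Task costs are hypothetical milliseconds, chosen only to show the effect.
tasks = {"shadow maps": 3.0, "geometry": 4.0, "physics": 2.0,
         "lighting": 3.0, "post-processing": 2.5}

# DX11-style: every task waits its turn in one queue.
serial_ms = sum(tasks.values())

# DX12-style: compute-friendly work overlaps with the graphics queue.
graphics = ["shadow maps", "geometry", "lighting"]
compute = ["physics", "post-processing"]
overlapped_ms = max(sum(tasks[t] for t in graphics),
                    sum(tasks[t] for t in compute))

print(serial_ms, overlapped_ms)  # 14.5 10.0
```

In the serial case the frame takes as long as every task added together; in the overlapped case it finishes as soon as the longer of the two queues does.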


AMD says DX12 will be able to leverage its GPU's asynchronous compute engines to perform more efficiently.

According to AMD, using asynchronous shaders can provide post-processing effects with minimal impact on performance. The company cited its LiquidVR SDK demo, which rendered a scene at an average of 245 FPS with async shaders and post-processing both turned off. With the post-processing effect on, the FPS took a huge hit, dropping to 158. With async shaders also turned on, the average FPS jumped back up to 230 -- only ever so slightly more taxing than the scene without post-processing. According to AMD, async shaders have the potential to save over 40% of performance.
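
Working backward from AMD's cited figures (our arithmetic, not AMD's), the frame-time cost of post-processing drops from roughly 2.25ms per frame to roughly 0.27ms with async shaders enabled:

```python
# Frame-time overhead of post-processing, from AMD's cited FPS figures.
def frame_ms(fps):
    return 1000.0 / fps

baseline, sync_pp, async_pp = 245, 158, 230

sync_overhead = frame_ms(sync_pp) - frame_ms(baseline)    # ~2.25ms per frame
async_overhead = frame_ms(async_pp) - frame_ms(baseline)  # ~0.27ms per frame

print(f"{sync_overhead:.2f}ms vs {async_overhead:.2f}ms")
```

By this measure most of the post-processing overhead is hidden in this particular demo; AMD's broader "over 40%" figure will depend on the workload being measured.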


AMD touts that its async shaders will make post-processing effects far less taxing.

AMD says that async shaders and asynchronous compute engines are part of the GCN DNA, so developers will be able to take advantage of them with the next-generation APIs. With AMD pouring its lessons from Mantle into Vulkan, the next iteration of OpenGL, that open standard will also be able to leverage AMD’s asynchronous shaders. In addition, AMD tells us that all the major game engines, like Epic’s Unreal Engine and Crytek’s CryEngine, will support them as well.

According to AMD, the PlayStation 4, which uses AMD hardware, already uses asynchronous shaders in games like InFamous Second Son and The Tomorrow Children to squeeze out more performance, and the company believes these lessons will carry over to the PC with the next-generation APIs. AMD also says the philosophy behind its asynchronous shaders will apply to the company’s GCN-based APUs.

16 Mar 01:38

Cody Wilson Wants To Help You Make a Gun

by samzenpus
An anonymous reader writes: In 2013 Cody Wilson posted online the design files needed to 3D-print weapons. The files were downloaded at least 100,000 times before the U.S. State Department ordered him to take them down. Last fall he reemerged with a new project, the Ghost Gunner -- a relatively small and affordable CNC milling machine that can easily manufacture the lower receiver of an AR-15. It was a different approach toward the same goal of multiplying the number of firearms in the world. But are we really facing a world where backyard bunker-builders are manufacturing their own gun components? Reporter Andrew Zaleski visited Wilson to check on the status of his project. What he found was a man in the throes of small-business hell. As Wilson puts it, "It's like the nightmare of a startup with the added complication that no one will allow you to do it anyway."

Share on Google+

Read more of this story at Slashdot.

24 Dec 00:34

IBM Wants to Tap Your PC and Android Devices to Cure Ebola

by Paul Lilly

Free app puts your Android device to work crunching data when it would otherwise be idle

Scientists are hard at work looking for a definitive cure for Ebola, and you can help. More precisely, if you're willing to donate idle-time computing power from your PC and/or Android device, you can aid the cause. How? The Scripps Research Institute (TSRI) has partnered with IBM and its World Community Grid to form a volunteer computing project called "Outsmart Ebola Together," and all you need to do is install a free app.

Once installed, your PC and/or Android phone or tablet will crunch data at times when it would otherwise be idle. Small computational assignments are sent out to participating devices, which the software completes and returns to scientists. It's essentially another distributed computing project, like Folding@home, and so far almost 3 million PCs and mobile devices used by more than 680,000 people and 460 institutions from 80 countries have contributed virtual supercomputing power to this and other WCG projects over the past 10 years.
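
The work-unit model behind grids like this can be sketched in a few lines. The code below is a hypothetical illustration, not World Community Grid's actual protocol: a server splits a large job into small assignments, volunteer devices compute them independently, and the results are gathered back up:

```python
# Minimal sketch of a volunteer-computing round: split, distribute, aggregate.
def make_work_units(candidates, unit_size):
    """Split a big list of, say, candidate drug compounds into small assignments."""
    return [candidates[i:i + unit_size]
            for i in range(0, len(candidates), unit_size)]

def volunteer_compute(unit):
    """Stand-in for the real scoring work a volunteer's device would do."""
    return [(c, c % 7) for c in unit]   # fake 'binding score' per candidate

candidates = list(range(100))            # pretend compound IDs
units = make_work_units(candidates, 10)  # 10 assignments of 10 compounds each
results = [r for unit in units for r in volunteer_compute(unit)]
print(len(units), len(results))          # 10 100
```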

"Our molecular images of the Ebola virus are like enemy reconnaissance," said Dr. Erica Ollmann Saphire of TSRI, one of the largest private biomedical research institutes in the United States. "These images show us where the virus is vulnerable and the targets we need to hit. In the Outsmart Ebola Together project, we will be able to harness World Community Grid’s virtual supercomputing power to find the drugs we need to aim at these targets."

It took just one week on the grid for the Ebola project to complete what would have taken a single PC with a single processor around 35 years, Time reports. Speed is critical in this case, as Ebola continues to spread and mutate.
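
Taking Time's figures at face value, that one-week turnaround implies a speedup on the order of 1,800x over the hypothetical single-processor PC (our arithmetic):

```python
# Implied speedup: 35 single-CPU years of work done in one week on the grid.
single_cpu_days = 35 * 365.25  # 35 years, counting leap days
grid_days = 7
speedup = single_cpu_days / grid_days
print(round(speedup))  # 1826
```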

The caveat? One would presume lower battery life since idle time is now being put to work; however, the software only does its thing when your device is plugged in and connected to Wi-Fi (to avoid data charges). Unfortunately there's no iOS version of the free app yet, though IBM is working on it.

For more information, go here.


15 Dec 00:08

The MythBusters take on Corning’s Gorilla Glass to see how it stacks up against the regular stuff (video)

by Mike Beasley


The displays on most smartphones these days, including the iPhone, are made using Gorilla Glass from Corning, Inc., which is stronger and lighter than regular glass. Just how much stronger and lighter, exactly, is a question that was put to the test by MythBusters Adam Savage and Jamie Hyneman as part of a promotional campaign by Corning.

The video (embedded below) puts two phones through quick drop and scratch tests before moving on to other applications for the glass, such as car windshields. A previous entry in the series covered the bendability of Gorilla Glass.



02 Nov 21:03

G.Skill's Phoenix Blade 480GB PCIe SSD Sports Quad Controllers in RAID 0

by Paul Lilly

This SSD can give your data whiplash

Today's high performance SATA 6Gbps solid state drives top out at around 590MB/s for read and write transfers. If you need something faster, you can slap a pair in a RAID 0 configuration, or switch over to PCI Express. G.Skill chose both with its new Phoenix Blade Series: kicking off the line is a 480GB PCIe SSD equipped with four LSI SF-2281 controllers in a RAID 0 setup.

It's designed with a PCIe Gen 2.0 x8 interface and is capable of up to 2,000MB/s maximum read and write speeds. Sustained read and write transfers are rated at 1,900MB/s and 1,050MB/s, respectively, along with up to 90,000 4KB random read IOPS and up to 245,000 4KB random write IOPS.
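
Striping across four controllers works like RAID 0 across four disks: logical data is split into fixed-size chunks dealt round-robin to the controllers, so large sequential transfers approach four times a single controller's throughput. The sketch below is purely illustrative; the stripe size and per-controller speed are hypothetical, since G.Skill hasn't published those details:

```python
# RAID 0 round-robin striping across four SSD controllers (illustrative only).
NUM_CONTROLLERS = 4
CHUNK_KB = 128   # hypothetical stripe size

def controller_for(offset_kb):
    """Which controller owns the chunk containing this logical offset."""
    return (offset_kb // CHUNK_KB) % NUM_CONTROLLERS

# Consecutive chunks land on consecutive controllers...
mapping = [controller_for(kb) for kb in (0, 128, 256, 384, 512)]
print(mapping)  # [0, 1, 2, 3, 0]

# ...so ideal sequential throughput scales with controller count.
single_controller_mbs = 500  # hypothetical per-controller speed
print(single_controller_mbs * NUM_CONTROLLERS)  # 2000, in line with the 2,000MB/s rating
```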

"By introducing extreme read and write performance, the potential of today’s multi-core processors can be unleashed, resulting in higher productivity and performance, whether you are in extreme gaming or professional design," G.Skill says.

Unfortunately G.Skill didn't announce a price for its latest SSD, though we spotted a product page on NCIXUS that's asking $800 for it. PCIe SSDs typically cost more than SATA-based drives, the latter of which have fallen below $0.50 per gigabyte, though we'll wait until the Phoenix Blade stabs through to more vendors before commenting on price.

EDIT:

G.Skill tells Maximum PC that the Phoenix Blade will be available on Newegg next week for $699.


02 Nov 21:03

British Army Looking For Gamers For Their Smart-Tanks

by samzenpus
concertina226 writes: The U.K. branch of global defense firm General Dynamics is working on a futuristic state-of-the-art smart-tank to replace the British Army's aging armored vehicle fleet, to be delivered to the Ministry of Defence in 2020. The Scout SV armored vehicle is the first fully-digitized armored fighting vehicle built for the British Army, and is far bigger and more durable than any of its existing tanks, which are now at least 20 years old. The tank comes in six variants that can be customized with tools for different missions, and has numerous sensors, cameras, and sights to offer real-time intelligence on weather conditions, target acquisition, and reconnaissance -- all crucial battlefield data commanders need to assess and direct situations. "With the capability in the Scout SV, we're really looking for the type of people who play Xbox games -- tech-savvy people who are able to take in a lot of information and process it in the proper way," says Kevin Connell, vice president of General Dynamics UK's Land Systems Regiment.
