There is some really unexpected news coming from the BCNranking 2015 Japanese market share report:
I want to remind you that in Japan the mirrorless market is as big as the “classic” DSLR segment. This makes the Olympus +12% jump even more impressive! The big loser is Sony, which didn’t release any major sub-$1,000 camera in 2015. It’s really good news for Olympus, which has often been written off as “dead” in the past. Now let’s hope the PEN-F and E-M1II (Photokina) will help keep the momentum going.
On the other hand, Panasonic gave up third place to Canon. And this is also a surprise if you take into account that the Canon EOS-M system is quite a joke in terms of lens offerings. So I wonder what “went wrong” at Panasonic. They certainly released some nice cameras in 2015. So maybe there is a marketing issue rather than a problem with the product range?
found via Mirrorlessrumors.
Here you have the very first real-world images of the new Olympus PEN-F! And these are the specs we got from our sources:
PEN-F announcement date: January 27
50-megapixel High Res mode
Made in “honor” of the PEN-F film camera
Two kit lenses: 14-42mm EZ and 17mm f/1.8
Kit prices: 1497-1797 Euro
The camera will be announced next week on January 27. Follow the live blogging of the event here on 43rumors on January 27 at 5-6am London time!
To get notified on all upcoming news and rumors be sure to subscribe on 43rumors here:
RSS feed: http://www.43rumors.com/feed/
Thanks to the source who shared this!
For sources: Sources can send me anonymous info at firstname.lastname@example.org (create a fake gmail account) or via contact form you see on the right sidebar. Thanks!
Rumors classification explained (FT= FourThirds):
FT1=1-20% chance the rumor is correct
FT2=21-40% chance the rumor is correct
FT3=41-60% chance the rumor is correct
FT4=61-80% chance the rumor is correct
FT5=81-99% chance the rumor is correct
Image courtesy: Mirrorlessons
The most difficult thing when using a long telephoto lens is to quickly compose your frame after spotting your flying subject. The first few times I couldn’t track anything because as soon as I looked through the viewfinder, I couldn’t find my subject anymore. All my reference points were lost. One solution is to keep your camera very close to your eye with the lens aimed at the same thing you are looking at. If you see a bird flying, you start following it before moving your eye to the viewfinder. Practice and experience also help.
With the EE-1 you can take pictures while putting some distance between yourself and the camera and keeping an eye on what’s happening around your frame, something an EVF won’t allow you to do. This is also important because you can observe how the birds behave in the air, how they change direction and where they go. Once you know how to use it, it can either help you enhance your tracking abilities (meaning one day you won’t need the EE-1 anymore) or become an inseparable companion for your wildlife photography.
Marcin Dobas also reviewed the new lens and writes:
Once again I was pleased with the image stabilization (it’s not easy to keep the camera steady when you are winded after constant running). I definitely appreciated the fact that a lens with a focal length equivalent of 600mm (840mm with the converter) can be quite comfortably held in your hands while cross-country running. That’s a big plus. It won’t come as a surprise to many readers that I don’t particularly enjoy running after deer with a full-frame 600mm f/4. Whatever you’re into, I guess. Yet again I appreciated the small size of the equipment, especially compared to an SLR. However, compared to a 400mm f/4 on APS-C, while it is smaller and lighter, the advantage would not be as pronounced as in the case of full frame.
First sample images of the M.Zuiko 300mm f/4, taken by photographer Ángel Lazagabaster at NamenColor.
Wasabi Bob has posted some full size photos taken with the prototype Panasonic 100-400mm lens on Flickr.
Preorder links to the two new MFT lenses:
Panasonic-Leica 100-400mm lens at Amazon, Adorama, BHphoto and Panasonic. In EU at WexUK. ParkCameras.
Olympus 300mm f/4.0 PRO lens at Amazon, Adorama, BHphoto, GetOlympus. In EU at Amazon DE. WexUK. ParkCameras.
What irony, a neo-Arminian Zwinglian!
Free software is built by a community of hackers and activists who care about freedom. But forces outside that community affect the work done within it, for good or ill. While we at the FSF regularly deal with GNU General Public License (GPL) violators (who we always hope are just community members waiting for a proper introduction), there is another force that can have a substantial effect on user freedom: governmental policy.
Laws, regulations, and government actions can have a lasting impact on users. The GNU GPL is based in copyright but uses its power in a "copyleft" way to actually protect users from the negative impacts of copyright, patents, and proprietary license agreements. While we can sometimes turn a law on its head to make it work for users like this, other times we are forced to push back in order to guarantee their rights. In order to achieve our global mission of promoting computer user freedom and defending the rights of software users everywhere, we must often take action to petition and protest governing bodies and their regulations. For the Licensing and Compliance Lab this is particularly relevant to our work, as these rules can affect how the licenses published by the FSF protect users.

2015 was a year filled with such actions, and 2016 will see much of the same. While our work this past year often involved issues with the US government, the scope of our work is global. As our worldwide actions on the Trans-Pacific Partnership (TPP) and other international agreements demonstrate, bad laws in the US have a tendency to spread around the globe. We work to educate the US public about problematic laws and regulations here, and we also work with supporters and partner organizations in countries around the world to achieve the same goals in their countries.
We want to take a moment to look back on the work we've done on the licensing team pushing for policies that protect users, and fighting to stop laws and regulations that would harm them.
As we explain on our international trade issue page "The FSF has been warning users of the dangers of the Trans-Pacific Partnership (TPP) for many years now. The TPP is an agreement negotiated in secret nominally for the promotion of trade, yet entire chapters of it are dedicated to implementing restrictions and regulations on computing and the Internet."
But the TPP is not the only threat looming. In October, FSF's Donald Robertson gave a talk at SeaGL outlining the threats from the alphabet soup of international "trade" agreements. A widening web of negotiations is criss-crossing the globe seeking to implement many of the same terrible restrictions found in TPP.
But we are of course not alone in our opposition to TPP. We worked together with dozens of other groups during the year. In November, we supported a rally and hackathon put on by our friends at the Electronic Frontier Foundation. They currently have another action helping people to contact Congress in the US, telling them to stop TPP. This year, we will have much more to do in order to stop TPP and many TPP clones in the future.
One of the biggest actions we took in 2015 involved fighting back against the DMCA's anti-circumvention provisions. We explained the issue back in April of 2015:
Every three years, supporters of user rights are forced to go through a Kafkaesque process fighting for exemptions from the anti-circumvention provisions of the DMCA... In short, under the DMCA's rules, everything not permitted is forbidden. Unless we expend time and resources to protect and expand exemptions, users could be threatened with legal consequences for circumventing the digital restrictions management (DRM) on their own devices and software and could face criminal penalties for sharing tools that allow others to do the same. Exemptions don't fix the harm brought about by the DMCA's anti-circumvention provisions, but they're the only crumbs Congress deigned to throw us when they tossed out our rights as users.
In the year's round of exemption proposals, we called for the repeal of these provisions and supported every proposed exemption. We called out the companies, organizations and government agencies that tried to lock users down by opposing these exemptions. When the Copyright Office failed to grant all proposed exemptions, we explained how the process was broken and called again for the repeal of the onerous law.
On this front, we had some success, as Congress and the Copyright Office are starting to listen. 2015 ended with the Copyright Office asking for public comments about the DMCA's anti-circumvention provisions and the exemptions process, noting many of the criticisms we levied throughout the year. In 2016, the fight continues. We'll need your help to end the nightmare of these restrictions and their broken exemption process, rather than simply patch over the problems they create.
Unfortunately, the DMCA isn't the only government policy seeking to lock down devices and restrict the ability of users to control their own computing. In 2015, the US Federal Communications Commission (FCC) announced the proposal of new rules requiring manufacturers to implement locks on all wireless devices. The FCC is charged with divvying up wireless spectrum in the US, and works to enforce regulations ensuring that devices do not exceed their mandated spectrum. But in trying to achieve that goal, they proposed rules that would in practice encourage device manufacturers to cripple their wireless-enabled hardware so that users could no longer install free software on those devices.
So the FSF and our allies fought back, starting a campaign to Save WiFi. The coalition came together and filed over 3,000 public comments in opposition to the rules. FSF licensing and compliance manager Joshua Gay and executive director John Sullivan even met with the FCC to make free software concerns heard. The work to protect WiFi continues in 2016.
Not every issue we confront in this arena is a threat to user freedom. Government policy can also work to help support free software, as we are seeing with the US Department of Education's recent push to upgrade the rules around grant-funded educational works. In October of 2015, the Department of Education called for comments on its proposed regulations, which were intended to create greater access and sharing by requiring grant-funded works to be under a free license. There was just one hitch — the regulations as proposed didn't quite get the job done, because they didn't explicitly require the freedom for downstream users to redistribute modified copies of the works. So we rallied users and free software activists to provide feedback to the Department of Education on the new rules. While no decision has yet been announced, we're excited about this new policy and our ability to help shape it to ensure that user freedom is enjoyed by all.
While 2015 was a big year in working to improve government policy, much still needs to be done in the year ahead. The fight to stop TPP still goes on, and other "trade" agreements loom on the horizon. For the DMCA, our voice was heard in 2015, but now we need to actually bring about the necessary changes. The FCC-instigated lockdown of wireless devices still hangs over our head. We will continue to fight for the rights of users on these issues, and any new ones that spring up.
But as our work in 2015 shows, we can't do it alone. We need the help of other organizations and activists to keep up the fight. And we need you as well. Our actions would mean nothing without your voice joining in to amplify and spread the message.
In addition to supporting our actions and making your voice heard, you can help fund the work we do to amplify your concerns. Can you support this important work by making a donation to the Free Software Foundation? You can make a long-term commitment to help the FSF sustain and grow the program for years to come by becoming an associate member for as little as $10/month (student memberships are further discounted). Membership offers many great benefits, too. Other ways you can help:
What if all of the sun's output of visible light were bundled up into a laser-like beam that had a diameter of around 1m once it reaches Earth?
Here's the situation Max is describing:
If you were standing in the path of the beam, you would obviously die pretty quickly. You wouldn't really die of anything, in the traditional sense. You would just stop being biology and start being physics.
When the beam of light hit the atmosphere, it would heat a pocket of air to millions of degrees (Fahrenheit, Celsius, Rankine, or Kelvin—it doesn't really matter) in a fraction of a second. That air would turn to plasma and start dumping its heat as a flood of x-rays in all directions. Those x-rays would heat up the air around them, which would turn to plasma itself and start emitting infrared light. It would be like a hydrogen bomb going off, only much more violent.
This radiation would vaporize everything in sight, turn the surrounding atmosphere to plasma, and start stripping away the Earth's surface.
But let's imagine you were standing on the far side of the Earth. You're still definitely not going to make it—things don't turn out well for the Earth in this scenario—but what, exactly, would you die from?
The Earth is big enough to protect people on the other side—at least for a little bit—from Max's sunbeam, and the seismic waves from the destruction would take a while to propagate through the planet. But the Earth isn't a perfect shield. Those wouldn't be what killed you.
Instead, you would die from twilight.
The sky is dark at night because the Sun is on the other side of the Earth. But the night sky isn't always completely dark. There's a glow in the sky before sunrise and after sunset because, even with the Sun hidden, some of the light is bent around the surface by the atmosphere.
If the sunbeam hit the Earth, x-rays, thermal radiation, and everything in between would flood into the atmosphere, so we need to learn a little about how different kinds of light interact with air.
Normal light interacts with the atmosphere through Rayleigh scattering. You may have heard of Rayleigh scattering as the answer to "why is the sky blue." This is sort of true, but honestly, a better answer to this question might be "because air is blue." Sure, it appears blue for a bunch of physics reasons, but everything appears the color it is for a bunch of physics reasons. (When you ask, "Why is the Statue of Liberty green?" the answer is something like, "The outside of the statue is copper, so it used to be copper-colored. Over time, a layer of copper carbonate formed (through oxidation), and copper carbonate is green." You don't say "The statue is green because of frequency-specific absorption and scattering by surface molecules.")
When air heats up, the electrons are stripped away from their atoms, turning it to plasma. The ongoing flood of radiation from the beam has to pass through this plasma, so we need to know how transparent plasma is to different kinds of light. At this point, I'd like to mention the 1964 paper Opacity Calculations: Past and Future, by Harris L. Mayer, which contains the single best opening paragraph to a physics paper I've ever seen:
Initial steps for this symposium began a few billion years ago. As soon as the stars were formed, opacities became one of the basic subjects determining the structure of the physical world in which we live. And more recently with the development of nuclear weapons operating at temperatures of stellar interiors, opacities become as well one of the basic subjects determining the processes by which we may all die.
Compared to air, the plasma is relatively transparent to x-rays. The x-rays would pass through the plasma, heating it through effects called Compton scattering and pair production, but would be stopped quickly when they reached the non-plasma air outside the bubble. However, the steady flow of x-rays from the growing pocket of superhot air closer to the beam would turn a steadily-growing bubble of air to plasma. The fresh plasma at the edge of the bubble would give off infrared radiation, which would head out toward the horizon (along with the infrared already on the way), heating whatever it finds there.
This bubble of heat and light would wrap around the Earth, heating the air and land as it went. As the air heated up, the scattering and emission from the plasma would cause the effects to propagate farther and farther around the horizon. Furthermore, the atmosphere around the beam's contact point would be blasted into space, where it would reflect the light back down around the horizon.
Exactly how quickly the radiation makes it around the Earth depends on many details of atmospheric scattering, but if the Moon happened to be half-full at the time, it might not even matter.
When Max's device kicked in, the Moon would go out, since the sunlight illuminating it would be captured and funneled into a beam. Slightly after the beam made contact with the atmosphere, the quarter moon would blink out.
When the beam from Max's device hit the Earth's atmosphere, the light from the contact point would illuminate the Moon. Depending on the Moon's position and where you were on the Earth, this reflected moonlight alone could be enough to burn you to death ...
... just as the twilight wrapped around the planet, bringing on one final sunrise.
There's one thing that might prevent the Earth's total destruction. Can Max's mechanism actually track a target? If not, the Earth could be saved by its own orbital motion. If the beam was restricted to aiming at a fixed point in the sky, it would only take the Earth about three minutes to move out of the way. Everyone on the surface would still be cooked, and much of the atmosphere and surface would be lost, but the bulk of the Earth's mass would probably remain as a charred husk.
The Sun's death ray would continue out into space. Years later, if it reached another planetary system, it would be too spread out to vaporize anything outright, but it would likely be bright enough to heat up the surfaces of the planets.
Max's scenario may have doomed Earth, but if it's any consolation, we wouldn't necessarily die alone.
So you have a multi-tenant SaaS application that uses PostgreSQL as its database of choice. As you are serving multiple customers, how do you protect each customer’s data? How do you provide full data isolation (logical and physical) between different customers? How do you minimize the impact of attack vectors such as SQL injection? How do you retain the flexibility to potentially move a customer to a higher hosting tier or higher SLAs?
Instead of putting every customer’s data in one database, simply create one database per customer. This allows for physical isolation of data within your Postgres cluster. So, for every new customer that registers, do this as part of the workflow:
CREATE DATABASE customer_A WITH TEMPLATE customer_template_v1;
In the example above, customer_template_v1 is a custom database template with all the tables, schemas, and procedures pre-created.
Note: You can use Schema or Row Level Security (v9.5) to effect isolation. However, Schema and Row Level Security would only allow for logical isolation. You could go the other extreme and use a DB cluster (as opposed to a database) per customer to effect complete data isolation. But the management overhead makes it a less than ideal option in most cases.
After the database is created as mentioned above, create a unique database user as well. This user would have permission to one (and only one) database:
CREATE ROLE customer_A_user WITH NOSUPERUSER NOCREATEDB LOGIN ENCRYPTED PASSWORD '';
REVOKE ALL ON DATABASE customer_A FROM PUBLIC;
GRANT CONNECT ON DATABASE customer_A TO customer_A_user;
GRANT ALL ON SCHEMA public TO customer_A_user WITH GRANT OPTION;
Now, in your middleware code, make sure to connect to the customer_A database only using customer_A_user. In other words, when a user from the customer_A organization logs into your SaaS application, use the appropriate database and database user name.
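To make that routing step concrete, here is a minimal Python sketch. The tenant table, helper name, and libpq-style DSN format are illustrative assumptions, not part of the original post:

```python
# Minimal sketch of per-tenant connection routing (names are illustrative).
# Each tenant maps to its own database and its own low-privilege role, so a
# compromised session cannot reach another customer's data.

TENANTS = {
    "customer_A": {"dbname": "customer_A", "user": "customer_A_user"},
    "customer_B": {"dbname": "customer_B", "user": "customer_B_user"},
}

def dsn_for_tenant(tenant_id, host="localhost", port=5432):
    """Build a libpq-style DSN for the tenant's dedicated database and role."""
    try:
        t = TENANTS[tenant_id]
    except KeyError:
        raise ValueError("unknown tenant: %r" % tenant_id)
    return "host=%s port=%d dbname=%s user=%s" % (host, port, t["dbname"], t["user"])

# A driver such as psycopg2 would consume this DSN, e.g.:
#   conn = psycopg2.connect(dsn_for_tenant("customer_A"))
```

The key point is that the tenant identifier selects both the database and the role; the application never connects with a role that can see more than one customer.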
If you wish, you can even create separate READ and WRITE users. So, to create a read user for the database:
CREATE ROLE customer_A_read_user WITH NOSUPERUSER NOCREATEDB LOGIN ENCRYPTED PASSWORD '';
GRANT CONNECT, TEMPORARY ON DATABASE customer_A TO customer_A_read_user;
GRANT USAGE ON SCHEMA public TO customer_A_read_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO customer_A_read_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO customer_A_read_user;
With the above you have fine-grained control over database access privileges, and every activity in the middleware needs to decide carefully which role (read or read/write) to use for access.
So, what DB user/role do you use to create the new customer database in the first place? Create a special DB user (say create_db_user) just for this purpose. Audit and monitor this user’s activity closely. Don’t use this DB user for anything else. Or you can create a new user for each new database and simply specify that at database creation time. Whatever happens, don’t use the Postgres root user for your web connections!
CREATE ROLE customer_B_user WITH NOSUPERUSER NOCREATEDB NOCREATEROLE LOGIN ENCRYPTED PASSWORD 'ABGF$%##89';
CREATE DATABASE customer_B WITH TEMPLATE customer_template_v1 OWNER = customer_B_user;
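A hedged sketch of how that provisioning step might look from the middleware side. The whitelisting regex and helper name are my assumptions; identifiers cannot be bound as query parameters, so they must be validated before interpolation:

```python
# Hypothetical provisioning step for a new customer. Database and role names
# cannot be passed as bound parameters, so they are whitelisted to a strict
# identifier pattern before being interpolated into the DDL. The password is
# escaped by doubling single quotes (standard SQL string escaping).
import re

def provisioning_sql(customer, password):
    """Return the DDL statements the dedicated create_db_user role would run."""
    if not re.fullmatch(r"[a-z][a-z0-9_]*", customer):
        raise ValueError("invalid customer identifier: %r" % customer)
    role = "%s_user" % customer
    return [
        "CREATE ROLE %s WITH NOSUPERUSER NOCREATEDB NOCREATEROLE LOGIN "
        "ENCRYPTED PASSWORD '%s';" % (role, password.replace("'", "''")),
        "CREATE DATABASE %s WITH TEMPLATE customer_template_v1 OWNER = %s;"
        % (customer, role),
    ]
```

In a real deployment these statements would be executed over a connection authenticated as the audited create_db_user role, never as a superuser.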
As you may have noticed, a number of SaaS applications give vanity URLs (example: https://customerA.example.com) to their customers. Some other SaaS applications have a concept of ‘customerId’ which is a required field for authentication into the SaaS application. The benefit is twofold:
If you are doing any encryption within the database (say with pgcrypto), make sure to use separate encryption keys for each customer. This adds cryptographic isolation between your customers’ data. Finally, when it comes to encryption and key management, avoid these common encryption errors developers keep making.
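One way to get a distinct key per customer without storing one secret per tenant is to derive them all from a single master secret. This is a sketch under that assumption; the HKDF-style construction, salt, and names are illustrative, not the post's prescription:

```python
# Sketch: derive a distinct encryption key per customer from one master
# secret using an HKDF-style extract-and-expand over HMAC-SHA256. Tenants
# end up cryptographically isolated, and the master secret itself never
# needs to touch the database.
import hashlib
import hmac

def tenant_key(master_secret, tenant_id, length=32):
    """Derive `length` bytes of key material bound to `tenant_id`."""
    # Extract: compress the master secret into a pseudorandom key.
    prk = hmac.new(b"tenant-key-salt", master_secret, hashlib.sha256).digest()
    # Expand: feed the tenant id into the output blocks.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + tenant_id.encode() + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]
```

The derived key would then be passed to pgcrypto functions (or used client-side) for that customer's data only, so leaking one tenant's key reveals nothing about the others.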
Comment and do let us know what other best practices make sense for multi-tenant SaaS access with PostgreSQL.
As such, there’s really no “standard” benchmark that will inform you about the best technology to use for your application. Only your requirements, your data, and your infrastructure can tell you what you need to know.
NoSQL is everywhere and we can't escape from it (although I can't say we want to escape). Let's leave the question of reasons outside this text, and just note one thing: this trend isn't related only to new or existing NoSQL solutions. It has another side, namely the schema-less data support in traditional relational databases. It's amazing how many possibilities are hiding at the edge of the relational model and everything else. But of course there is a balance that you should find for your specific data. It isn't easy, first of all because you're required to compare incomparable things, e.g. the performance of a NoSQL solution and a traditional database. Here in this post I'll make such an attempt and show a comparison of jsonb in PostgreSQL, json in MySQL and bson in MongoDB.
jsonb, with slightly extended support in the upcoming PostgreSQL 9.5 release, and several other examples (I'll talk about them later). Of course these data types are supposed to be binary, which means great performance. Base functionality is equal across the implementations because it's just obvious CRUD. And what is the oldest and almost primal desire in this situation? Right, performance benchmarks! PostgreSQL and MySQL were chosen because they have quite similar implementations of json support, and MongoDB as a veteran of NoSQL. The EnterpriseDB research is slightly outdated, but we can use it as the first step on a road of a thousand li. The final goal is not to display performance in an artificial environment, but to give a neutral evaluation and to get feedback.
pg_nosql_benchmark from EnterpriseDB suggests an obvious approach: first of all, the required number of records must be generated using different kinds of data and some random fluctuations. This data is saved into each database, and we perform several kinds of queries over it.
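The generation step might look roughly like this in Python; the field names and distributions here are made up for illustration, since the real pg_nosql_benchmark uses its own generators:

```python
# Rough sketch of the record-generation step: build N JSON documents with a
# few field types and some random fluctuation, ready to be inserted into
# each database under test. Field names are illustrative only.
import json
import random

def make_records(n, seed=42):
    """Generate n JSON documents deterministically from a seed."""
    rng = random.Random(seed)  # seeded so every database gets identical data
    records = []
    for i in range(n):
        records.append({
            "id": i,
            "name": "item_%d" % rng.randrange(10000),
            "price": round(rng.uniform(1, 500), 2),
            "tags": rng.sample(["a", "b", "c", "d", "e"], 3),
        })
    return [json.dumps(r) for r in records]
```

Seeding the generator matters for a fair comparison: all three databases must receive exactly the same documents, otherwise differences in record size would pollute the timings.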
pg_nosql_benchmark doesn't have any functionality for working with MySQL, so I had to implement it, similar to the PostgreSQL version.
There is only one tricky thing with MySQL: it doesn't support json indexing directly, so you're required to create virtual columns and then create indexes on them.
Speaking of details, there was one strange thing in pg_nosql_benchmark. I figured out that a few types of generated records were beyond the 4096-byte limit for the mongo shell, which means those records were just dropped. As a dirty hack for that we can perform the inserts from a js file (and, by the way, that file must be split into a series of chunks smaller than 2GB).
Besides, there are some unnecessary time expenses related to the shell client, authentication and so on. To estimate and exclude them I had to perform a corresponding number of "no-op" queries for all databases (they're actually pretty small).
After all modifications above I've performed measurements for the following cases:
Each of them was tested on a separate m4.xlarge Amazon instance with Ubuntu 14.04 x64 and default configurations; all tests were performed with 1,000,000 records. And don't forget that postgresql-server-dev-9.5 must be installed. All results were saved in a json file, so we can visualize them easily using matplotlib (see here).
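As a sketch of that visualization step, assuming the results file is a flat list of {database, operation, seconds} rows (the actual layout of my file may differ):

```python
# Reshape saved benchmark results into per-database series that matplotlib
# could plot as grouped bars. The input layout is an assumption: a JSON list
# of rows like {"database": "pg", "operation": "insert", "seconds": 10.2}.
import json

def series_by_database(results_json):
    """Return (sorted database names, {operation: [seconds per database]})."""
    rows = json.loads(results_json)
    dbs = sorted({r["database"] for r in rows})
    ops = sorted({r["operation"] for r in rows})
    series = {
        op: [next(r["seconds"] for r in rows
                  if r["database"] == db and r["operation"] == op)
             for db in dbs]
        for op in ops
    }
    return dbs, series

# With matplotlib, each series would become one group of bars:
#   for op, values in series.items():
#       plt.bar(..., values, label=op)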
Besides that, there was a concern about durability. To take this into account I made a few specific configurations (IMHO some of them are realistic, but some are quite theoretical, because I don't think anyone would use them for production systems):
All charts are presented in seconds (if they relate to query execution time) or MB (if they relate to the size of a relation/index). Thus, for all charts, smaller is better.
Update is another difference between my benchmarks and pg_nosql_benchmark. It can be seen that MongoDB is the obvious leader here, mostly, I guess, because of a PostgreSQL and MySQL restriction: to update one value you must overwrite the entire field.
As you can guess from the documentation and this answer, writeConcern j:true is the highest possible transaction durability level (on a single server); it should be equivalent to the corresponding durable configuration above. I'm not sure about the durability, but fsync is definitely slower for update operations here.
Performance measurement is a dangerous field, especially in this case. Everything described above isn't a complete benchmark; it's just a first step toward understanding the current situation. We're now working on YCSB tests to make more complete measurements, and if we get lucky we'll compare the performance of cluster configurations.
It looks like I'll be participating in PgConf.Russia this year, so if you're interested in this subject, welcome.
Ring is a multimedia communication platform with secured media channels that doesn't require centralized servers to work. It is developed by Savoir-faire Linux, a Canadian company located in Montréal, Québec. It is a potential free-software replacement for Skype, and possibly more.
The way everyone perceives the world changed when Edward Snowden, WikiLeaks and others started massively warning the public about global surveillance by our states, network-control companies and so on. The need for software solutions that give control back to the user has never been more urgent. As citizens of democracies and free-software professionals, we are worried about how much of our private data is concentrated in and controlled by monolithic internet giants. This is a real problem for the global economy and a serious roadblock to innovation. These are our main reasons for making something different.
In conjunction with these concerns, Savoir-faire Linux had developed another project just before: SFLPhone. It was only usable in a centralized setup (SIP or IAX servers were necessary), and communication wasn't secured by encryption/authentication. That first experience was a good starting point for proposing something more evolved. Even though Ring is currently in alpha, it already provides decentralization and secure communication, whatever the media exchanged.
All these matters are our guiding principles, and we invite developers who want to join this project to contribute.
Within Savoir-faire Linux, we use it as our main phone; some even use only Ring to make calls. We also use it as a video conferencing tool for our daily communications with our different branches around the world.
In our team, one of us has even built an Arduino-based circuit to connect Ring to the lights of his house: he can turn them on/off from a distance.
Our great beta-testers (who may even be our mums, dads, friends, ...) use Ring to make calls, and they especially like the instant-messaging feature, which they use the most. It's fantastic to hear someone outside our daily work environment give Ring a try and call it awesome, even at the current alpha stage.
We're also active in local meetups (here in Montreal), where free-software enthusiasts are present, to demonstrate our technology. First impressions make us confident in our decisions.
Ring is a particular piece of software for at least three reasons:
The interoperability over communication protocols and the goal to make it available for anyone, not just a group of crypto-savvy enthusiasts. We aim for universality in Ring: as an example, we don't support XMPP yet, but there is nothing in the design that could block its implementation. Developers are welcome!
The fact we intend to support decentralization (DHT) and industrial standards like SIP in the same application.
Our layered design permits the use of its low-level core to make something completely different from our current clients. Ring opens up prospects in the Internet of Things.
Ring comes from SFLPhone, which was already under the GPLv3 license. The choice was always obvious for Savoir-faire Linux, especially when looking at the goals of the Ring software itself:
Only GPL can give the necessary guarantees to achieve that.
The most important thing is to use our available front-ends for your daily usage. Replace non-free solutions and grow the mesh: this is especially important because of the distributed nature of Ring. We need to grow to secure the DHT mesh.
Immediately after this comes the translation of Ring: better accessibility is key to wide-usage success. And we're free software, so the code is ready for «happy hacking». We're waiting for security analyses, enhancements, patchsets, ... pick up the code and let us know what you think about it. For sure we try to do all of that every day, but we're a relatively small team for all these tasks.
Community, we're waiting for your help!
For that we propose various public tools:
General information: https://ring.cx : find binary packages, documentation links, news, ...
IRC : #ring on freenode.org
Patchset review and repository: https://gerrit-ring.savoirfairelinux.com
Online translation: https://www.transifex.com/savoirfairelinux/ring/
As Ring is in the alpha development stage, we always have ideas to improve or enhance it, so the list is long. In the immediate future we want to deliver an Android version of our front-end. After that come generic discussion channels (chat): during a call, a conference, "out-of-call", ... Then fast and secure file sharing (which comes after a way to offer a guaranteed generic data-stream channel). Right after, we are working on a way to provide a "distributed services" framework: DNS, routing, or whatever anyone is able to dream up as a final usage; we want to provide a solid solution to make it real. We plan to enter the beta stage in early 2016.
Enjoy this interview? Check out our previous entry in this series, featuring Michael Lissner and Brian Carver of RECAP The Law.
PostgreSQL 9.5 has been released with lots of new features for the database management system, including UPSERT, row-level security, and several "big data" features. We previewed some of these features back in July and August. "A most-requested feature by application developers for several years, 'UPSERT' is shorthand for 'INSERT, ON CONFLICT UPDATE', allowing new and updated rows to be treated the same. UPSERT simplifies web and mobile application development by enabling the database to handle conflicts between concurrent data changes. This feature also removes the last significant barrier to migrating legacy MySQL applications to PostgreSQL."
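Row-level security, mentioned in the announcement above, lets the database itself decide which rows a role may see. A minimal sketch, with hypothetical table and policy names:

```sql
-- Hypothetical table: each row records who owns it.
CREATE TABLE documents (
    id    serial PRIMARY KEY,
    owner text   NOT NULL DEFAULT current_user,
    body  text
);

-- Turn on row-level security for the table...
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;

-- ...and only let a role see (and modify) rows it owns.
CREATE POLICY owner_only ON documents
    USING (owner = current_user);
```

With the policy in place, a plain `SELECT * FROM documents` run by an ordinary role returns only that role's rows; no application-side filtering is needed.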
I once believed that multi-protocol IM clients were a stopgap on the road to an inevitable federated XMPP future. https://t.co/lSQm80ugwe
I'm sitting on a train traveling from Illinois to California, the long stretch of a journey from Madison to San Francisco. Morgan sits next to me. We are staring out the windows of the observation deck of this train as we watch the snow covered mountains pass by. I am feeling more relaxed and at peace than I have in years.
2016 is opening in a big way for me. As you may have heard (I mentioned it in the last State of the Goblin post) MediaGoblin was accepted into the Stripe Open Source Retreat program. Basically, Stripe gives us no-strings-attached funding for me to advance our work on MediaGoblin, but they wanted me to work from their office during that time. Seems like quite a deal to me! Unfortunately it does mean leaving Morgan behind in Madison for that time period. But that's why we splurged on a fancy train car and why she's joining me in San Francisco for the first week, so we can spend some quality time together. (Plus, Morgan has a conference that first week in San Francisco anyway; double plus, Amtrak has an extremely generous baggage policy so I'm able to get all of the belongings I need for that period shipped along with me fairly easily.) Morgan and I have been talking about but not really taking a vacation for a while, so we decided the moving-scenery approach would be a nice way to do things. It's great... we're mostly reading and drinking tea and staring out the window at the beautiful passings-by. I could hardly imagine a nicer send-off. (So yeah, if you're considering taking such a journey with your loved ones, I recommend it.)
The passage of scenery leads to reflection on the passage of time. Now seems a good time to write a bit about 2015 and what it meant. It was a very eventful year for me. I have come recently to explain to people that "I live a magical and high-stress life"; 2015 evoked that well. From a personal standpoint, Morgan's and my relationship runs strong, maybe stronger than ever, and I am thankful for that. From the broader family standpoint, the graph advances steadily at times, with strong peaks and valleys, perhaps more pronounced than usual. Love, gain, success, loss... it feels that everything has happened this year. Our lives have also been rearranged dramatically in an attempt to help a family member in a time of need, and that has its own set of peaks and valleys, as is to be expected. But that is the stuff of life, and you do what you can when you can, and you try your best, and you hope that others will try their best; what happens from there happens, and you use it to plan the next round of doing the best you can.
That's all very vague I suppose, but many things feel too private to discuss so publicly. Nonetheless, I wanted to record the texture of the year.
So what in the way of, you know, that thing we call a "career"? Well, it has continued to be magical, in the way that I have had a lot of freedom to explore things and address issues I really care about. Receiving an award (particularly since I did not know I had even been a candidate ahead of being notified that I received it) has also been gratifying and reassuring in some ways; I regularly fear that I am not doing well enough at advancing the issues I care about, but clearly some people think I am, and that's nice. It has also continued to be high stress, in that the things I worry about feel very high stakes on a global level, the difficulty of accomplishing them feels just as high, and of course many are not there yet. Nonetheless, there has been a lot of progress this year, though it has come with a worrying increase in the number of things I am attempting to accomplish.
We're much nearer to 1.0 on MediaGoblin, which is a huge relief. Of course, this is mostly due to Jessica Tallon's hard work on getting federation in MediaGoblin working, and other MediaGoblin community members doing many other interesting things. Embarrassingly, I have done a lot less on MediaGoblin than in the last few years. In a sense, this is okay, because the money from the campaign has been going to pay Jessica Tallon, and not myself. I still feel bad about it though. The good news is that the focus time from the Stripe retreat should allow me the space to hopefully get 1.0 actually out the door. So that leads to strong optimism.
The reduced time spent coding on MediaGoblin proper has been deceptive, since most of the projects I've worked on have spun out of work I believe is essential for MediaGoblin's long-term success. I took a sabbatical from MediaGoblin proper mid-year to focus on two goals: advancing federation standards (and my own understanding of them), and advancing the state of free software deployment. (I'm aware of a whiff of yak fumes here, though without tackling each of these I can't see how MediaGoblin can succeed in its present state.) I believe I have made a lot of progress in both areas. As for federation, I've worked hard at participating in the W3C Social Working Group, I have done some test implementations, and recently I became co-editor on ActivityPump. On deployment, much work has been done on the UserOps side, both in speaking and in actual work. After initially starting to try to use Salt/Ansible as a base and hitting limitations, then trying to build my own Salt/Ansible'esque system in Hy and then Guile and hitting limitations there too, I eventually came to look into (after much prodding) Guix. At the moment, I think it's the only foundation solid enough on which to build the tooling to get us out of this mess. I've made some contributions, albeit mostly minor, have begun promoting the project more heavily, and am trying to work towards getting more deployment tooling done for it (so little time though!). I'm also now dual booting between GuixSD and Debian, and that's nice.
(Speaking of, towards the end of the year I switched to a Minifree x200 on which I'm dual booting Debian and Guix. I believe this puts me much deeper into the "free software vegan" territory.)
I also believe that over the last year I have changed dramatically as a programmer. For nearly ten years I identified as a "python web developer", but that identity no longer feels like an ideal description. One thing I have always been self-conscious of is how little I've known about deeper computer science fundamentals. This has changed a lot, and I credit much of it to spending so much time in the Guile and Scheme communities, and reading the copious interesting literature that is available there. My brother Steve and I also now often meet and watch various programming lectures together and discuss them, which has been both illuminating and a great way to understand a side of my brother I never knew. It's a nice mix; I'm a very get-things-done person, he's a very theoretical person, and we're meeting in the middle, and I think both of us are stretching our brains in ways we hadn't before. I feel like a different programmer than I was. A year and a half ago, I remember being on a bike ride with Steve and complaining to him that I didn't understand why functional programmers are so obsessed with immutability... mutation is so useful, I exclaimed! Steve paused and said very carefully, "Well... mutation brings a lot of problems..." but I just didn't understand what he was getting at. Now I look back on that bike ride and wonder at the former-me taking that position.
(All that said though, I'm glad that I've had the background I have of being a "python web developer" first, for a matter of perspective...)
I do feel that much has changed in my life in this last year. There were hard things, but overall, life has been good to me, and I still am doing what I believe in and care about. Not everyone has that opportunity. And this train ride already points the way to a year that should be productive, and will certainly be eventful.
Anyway, that's enough navel-gazing-reflection, I suppose. One more navel-gaze: here's to the changed person on the other end of 2016. I hope I can do them justice. And I hope you can do yourself justice in 2016 too.
Preorder links to the two new MFT lenses:
Panasonic-Leica 100-400mm lens at Amazon, Adorama, BHphoto and Panasonic. In EU at WexUK.
Olympus 300mm f/4.0 PRO lens at Amazon, Adorama, BHphoto, GetOlympus. In EU at Amazon DE. WexUK.
SLRgear posted the full review of the new Olympus 300mm PRO lens and writes:
The Olympus 300mm ƒ/4 Pro is one heck of a lens. Simply put. After a long time in development — it was announced as “in-development” back at CP+ in 2014 — the biggest, brightest supertelephoto lens of the Micro Four Thirds system is, well, not so “big” after all, physically at least. An impressive feat of engineering, the folks at Olympus have managed to shrink down a 600mm-equivalent supertelephoto lens, with a constant, relatively bright ƒ/4 aperture and image stabilization into a remarkably small, comfortable, hand-holdable, weather-sealed lens.
Like Olympus’ other Zuiko Pro lenses, image quality from the 300mm ƒ/4 is thoroughly impressive on all fronts: excellent sharpness and all-around wonderful optical qualities with little to no distortion, aberrations or vignetting. What’s even more impressive is the lens’ stunning image stabilization. Combining lens-based and body-based I.S., the Olympus 300mm ƒ/4 Pro lets you capture handheld images down to shockingly slow shutter speeds. All this, plus impressive close-focusing capabilities, make the Olympus 300mm ƒ/4 Pro a stunning lens for the professional and advanced photographer looking for a top-notch wildlife, sports, stage performance and close-up lens no matter the weather or lighting conditions. It may be pricey, but this is one of the best lenses Olympus has made thus far.
The headline of Postgres 9.5 is undoubtedly Insert… on conflict do nothing/update, more commonly known as Upsert or Merge. This removes one of the last remaining features which other databases had over Postgres. Sure, we'll take a look at it, but first let's browse through some of the other features you can look forward to when Postgres 9.5 lands:
Pivoting and rolling up data have sort of been possible in Postgres before, but they required you to know in advance which values you were projecting to. With the new functionality to group various sets together, rollups of the kind you'd normally do in something like Excel become trivial.
So now instead you simply add the grouping type just as you would on a normal group by:
SELECT department, role, gender, count(*) FROM employees GROUP BY your_grouping_type_here;
Simply select the type of rollup you want and Postgres will do the hard work for you. Let's take a look at the given example of department, role, gender:
grouping sets will project out the count for each specific key. As a result you'd get each department key, with the other keys as null, and the count of rows that matched that department.
cube will give you the same values as above, but also the rollups of every individual combination. So in addition to the total for each department, you'd get breakdowns by department and gender, department and role, and department, role and gender.
rollup will give you something similar to cube, but only the detailed groupings in the order they're presented. So if you specified rollup (department, role, gender), you'd have no rollup for department and gender alone.
Check the what's-new wiki for a bit more clarity on examples and output.
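A sketch of all three variants against the hypothetical employees table used above:

```sql
-- GROUPING SETS: you pick exactly which groupings to compute.
SELECT department, role, gender, count(*)
FROM employees
GROUP BY GROUPING SETS ((department), (department, role), ());

-- CUBE: every combination of the listed columns.
SELECT department, role, gender, count(*)
FROM employees
GROUP BY CUBE (department, role, gender);

-- ROLLUP: only the left-to-right prefixes, i.e.
-- (department, role, gender), (department, role), (department), ().
SELECT department, role, gender, count(*)
FROM employees
GROUP BY ROLLUP (department, role, gender);
```

Columns not part of the grouping in a given output row come back as null, which is how you tell the subtotal rows apart from the detailed ones.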
I only use foreign tables about once a month, but when I do use them they've inevitably saved many hours of creating a one-off ETL process. Even still, the effort of setting up new foreign tables has shown a bit of their infancy in Postgres. Now, once you've set up your foreign server, you can import the schema, either all of it or just the specific tables you prefer.
It’s as simple as:
IMPORT FOREIGN SCHEMA public FROM SERVER some_other_db INTO reference_to_other_db;
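If you only want a subset rather than the whole schema, the same statement accepts a LIMIT TO clause (the table names here are hypothetical):

```sql
-- Import only the listed tables from the foreign schema.
IMPORT FOREIGN SCHEMA public
    LIMIT TO (users, orders)
    FROM SERVER some_other_db
    INTO reference_to_other_db;
```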
If you're managing your own Postgres instance and running HA, pg_rewind could become especially handy. Typically, to spin up replication you have to first download the physical (also known as base) backup. Then you have to replay the Write-Ahead Log (WAL) so it's up to date, and only then do you actually flip on replication.
Typically with databases, when you fail over you shoot the other node in the head, or STONITH. This means just get rid of it, completely throw it out. This is still a good practice, so bring it offline and make it inactive, but from there you can now use pg_rewind to resynchronize it instead of rebuilding it from scratch. This could save you from pulling down lots and lots of data to get a replica back up once you have failed over.
Upsert of course will be the highlight of Postgres 9.5. I already talked about it some when it initially landed. The short of it is, if you're inserting a record and there's a conflict, you can choose to either do nothing or update the existing row instead.
Essentially this lets you have the typical create-or-update experience that most frameworks provide, but without the potential race condition that produces incorrect data.
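A minimal sketch of both forms, using a hypothetical counters table:

```sql
CREATE TABLE counters (
    name text   PRIMARY KEY,
    hits bigint NOT NULL DEFAULT 0
);

-- Update the existing row on conflict; EXCLUDED refers to the
-- row that failed to insert.
INSERT INTO counters (name, hits)
VALUES ('homepage', 1)
ON CONFLICT (name)
DO UPDATE SET hits = counters.hits + EXCLUDED.hits;

-- Or silently skip the insert entirely.
INSERT INTO counters (name)
VALUES ('homepage')
ON CONFLICT (name) DO NOTHING;
```

Because the conflict check and the update happen in one statement, two concurrent clients bumping the same counter can't race each other the way a separate SELECT-then-INSERT-or-UPDATE would.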
There are a few updates to JSONB. The one I'm most excited about is making JSONB output in psql read much more legibly.
If you’ve got a JSONB field just give it a try with:
SELECT jsonb_pretty(jsonb_column) FROM foo;
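Pretty-printing isn't the only JSONB change in 9.5; it also adds modification operators and functions. A quick sketch:

```sql
-- Concatenation merges objects, with the right side winning
-- on duplicate keys.
SELECT '{"a": 1}'::jsonb || '{"b": 2}'::jsonb;
-- {"a": 1, "b": 2}

-- jsonb_set replaces the value at a path.
SELECT jsonb_set('{"a": {"b": 1}}'::jsonb, '{a,b}', '2');
-- {"a": {"b": 2}}

-- The minus operator deletes a key.
SELECT '{"a": 1, "b": 2}'::jsonb - 'b';
-- {"a": 1}
```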
Just in time for the new year, the RC is ready and you can get hands-on with it. Give it a try, and if there's more you'd like to hear about Postgres, feel free to drop me a note at email@example.com.
Those of you who follow me on Twitter may be aware of my recent travails with a bogeyman I've named only as "XML". Having banged my head against this abomination of software for several days, it's venting time so buckle in for the ride.
The summer I was working at Calxeda, my friends @frozenfoxx and @Sweet_Grrl got me hooked on a tabletop game: Warmachine, by Privateer Press (PP). At a high level (since PP's digital presence is pretty last-decade), Warmachine is a tabletop miniature army game with models in the 2.5cm to 12cm base diameter range. Models and units of models have statistics in the classical D&D style, and players engage in deathmatch- and king-of-the-hill-style games.
Compared to Warhammer 40k which is probably better known, Warmachine features a much more steampunk theme & universe, simpler rules of play, lower entry price point and a greater focus on balance of play rather than on army construction and model specialization.
In short, Warmachine is a player's game while Warhammer is a modeler's game.
Unfortunately, a few realities of this game genre assert themselves.
As a student with homework, I simply haven't been able to attend Austin's various weekly Warmachine evenings in about a year which means it's a long time since I've gotten to play a game I enjoy a lot. Unlike Starcraft or Dota I can't just drop into a game at my convenience. But what if I could?
I've previously messed with AIs for Magic: the Gathering, so at some point the idea asserted itself that I could (or even should) try to build a full-up simulator for Warmachine, IP issues be damned.
There are two "moving pieces" to this project. Or maybe prior art is the better term. Sebastien Laforet, hereafter referred to as the Frenchman, built an Android app which is capable of rendering data files that encode the rules of play and model statistics for the game. Great! Someone else already typed all that in for me. In English, but typed in. These files are available on GitHub and are "just" XML. More on this in a bit.
Two other people, "Hobo" and "PG_Corwin", built a tile set for the VASSAL tabletop simulation engine. Unfortunately VASSAL is just a sprite engine with no understanding of the rules, or this whole project would be moot. Treating the module as a JAR yields a whole bunch of sprites in its /images/ tree, so there's "free" art to work with as well.
In theory this should be dead easy. The rules files are a full enumeration of every known piece in the game, and the sprites package has an image for most of them, so just wire them together into a data store of some sort and I'd be done from an assets point of view. I'd just need to, well, build the graphics engine and figure out how to simulate the ruleset.
Unfortunately this isn't so simple for a couple reasons.
First of all, the XML files Sebastien put together are designed to do one thing and one thing only: encode the rule text on the playing cards associated with each model and unit. Cards are generally formatted with an image, some fluff and statistics on the front and rules on the back. So for instance this recently released model, Coleman Stryker v3 has a model and stat card face as such:
So we can slap all this in XML and it works just fine (with serious elisions for length)
<warcaster id="Yz02" name="Stryker3" full_name="Lord General Coleman Stryker"
           qualification="Cygnar Epic Cavalry Warcaster" focus="6"
           warjack_points="5" fa="C" completed="true">
  <basestats name="Stryker" spd="8" str="6" mat="7" rat="6" def="15" arm="16"
             cmd="10" hitpoints="18" immunity_electricity="true" />
  <weapons>
    <melee_weapon name="Quicksilver MKIII" pow="8" p_plus_s="14"
                  magical="true" reach="true">
      <capacity title="DISRUPTION"> .... </capacity>
    </melee_weapon>
    <mount_weapon name="Mount" pow="10">
      <capacity title="THUNDEROUS IMPACT"> .... </capacity>
    </mount_weapon>
    <ranged_weapon name="Quicksilver Blast" rng="8" rof="1" aoe="-" pow="14"
                   magical="true" electricity="true">
      <capacity title="DISRUPTION"> .... </capacity>
    </ranged_weapon>
  </weapons>
  <feat title="LIGHTNING CHARGE"> .... </feat>
  <spell name="ARCANE BOLT" cost="2" rng="12" aoe="-" pow="11" up="NO" off="YES">....</spell>
  <spell name="CHAIN BLAST" cost="3" rng="10" aoe="3" pow="12" up="NO" off="YES">....</spell>
  <spell name="ESCORT" cost="2" rng="SELF" aoe="CTRL" pow="-" up="YES" off="NO">....</spell>
  <spell name="FURY" cost="2" rng="6" aoe="-" pow="-" up="YES" off="NO">....</spell>
  <spell name="IRON AGGRESSION" cost="3" rng="6" aoe="-" pow="-" up="YES" off="NO">....</spell>
  <capacity title="ELITE CADRE [STORM LANCES]" type="">....</capacity>
  <capacity title="FIELD MARSHAL [ASSAULT]">....</capacity>
  <capacity title="PLASMA NIMBUS">....</capacity>
</warcaster>
So we have some meaning for a <warcaster>, which is a title and a <model>; we have <basestats>, which is common to pretty much everything; and we have a <weapons> block with a list of weapon entries, each of which has some attributes (rules) named <capacity>. This is all pretty sane. Writing a set of functions which can walk this tag tree using clojure.xml was really simple and worked great: all you have to do is reduce over all this tag soup with some bindings and a state object to build a representation of a given model.
That code has been sitting for about 12 months, because shortly thereafter batteries of special cases came out of the woodwork and fouled up my initial efforts.
See, as in Magic: the Gathering, Warmachine derives a lot of, well, value from novelty and from mixing things up in the ruleset. This means it's not uncommon to find things which aren't what they appear to be. The newest faction in the game has this awesome model:
The text at the bottom, "Battle engine and solos", is the important bit. The TEP itself is what is called a Battle Engine and has a whole battery of rules associated with that concept. However, it comes in a box with three other models which aren't part of the Engine entity; they are independent during play, hence the use of the term Solo. So... what the hell is this thing and how do we represent it?
Well this is our Frenchman's answer, again with omissions:
<battleEngine id="cocE01" full_name="Transinfinite Emergence Projector & Permutation Servitors"
              cost="9" qualification="Convergence Battle engine" fa="2" completed="true">
  <basestats spd="5" str="10" mat="0" rat="4" def="10" arm="19" cmd="10"
             hitpoints="20" construct="true" pathfinder="true"/>
  <weapons>
    <ranged_weapon name="Aperture Pulse" rng="SP10" aoe="-" pow="10" rof="1" location="-">
      <capacity title="AUTO FIRE">....</capacity>
      <capacity title="FIRING FORMULAE">....</capacity>
    </ranged_weapon>
  </weapons>
  <capacity title="GUN PLATFORM">....</capacity>
  <capacity title="SACRIFICIAL PAWN [PERMUTATION SERVITORS]">....</capacity>
  <capacity title="SERVITOR SATELLITES">....</capacity>
  <capacity title="STEADY">....</capacity>
  <model id="Permutation Servitors" full_name="Permutation Servitors">
    <basestats spd="6" str="3" mat="5" rat="5" def="12" arm="13" cmd="0"
               construct="true" pathfinder="true"/>
    <capacity title="ORBIT">....</capacity>
    <capacity title="STEADY">....</capacity>
  </model>
</battleEngine>
Did you catch that? <battleEngine> NESTS another <model>! From the perspective of the WHAC app, which seeks to duplicate the cards that come in the box, this makes sense. There's a card which represents the TEP, and there's another card which gives the statistics for each of the spawnable Servitors. <battleEngine> as a tag is implicitly treated as a <model>, same as <warcaster> was earlier.
Rendering in the app, this looks just fine: you render the "top" model, the big ol' TEP that anyone reading the cards cares about, and the Servitors are secondary and so don't occur as a primary listing.
Unfortunately this is nonsense from my point of view as a consumer trying to extract structured information. The TEP is a model in and of itself, as are the Servitors, and they come together in a package that PP calls PIP 36028.
A similar problem applies to units. It turns out that the Frenchman encodes units using this implicit <model> behavior. So a unit will be named "Foo, Leader & Grunts", use the implicit <basestats>, some weapons and capacities, and that'll be all. Which sorta works.
Unfortunately it falls apart in units whose models aren't all the same. I've wasted enough length on images and listings here, but go check out the encoding of the Black 13th if you care. The Black 13th is composed of three characters, Lynch, Watts and Ryan, each of whom is unique. They are encoded using the implicit <model> on their shared <unit> to describe Lynch, the leader, while the other two have their own nested <model> entries.
Again from a rendering standpoint this makes sense, but really what I want is something like this:
<unit name="the Black 13th" id="...">
  <model name="Lynch"> ... </model>
  <model name="Ryan"> ... </model>
  <model name="Watts"> ... </model>
</unit>
because that would make sense.
The even more degenerate case of this is unfortunately the common case of an infantry unit: "Foo, Leader & Grunts", where the leader gets the implicit <model> and there'll be a <model name="grunts"> or something in its body. This is really a problem because usually the officer has a different sprite than the grunts, and the grunts need to have an ID in order to have a sprite permanently associated with them. Guess what attribute they may not have.
And don't get me started on the spelling errors and qualification mistakes and so forth I've found in these files as a result of trying to automate parsing them.
The sprites are their own little shitshow. There isn't a naming convention across the files, as they were done by several artists over the course of years. Most of the time they're named with substrings or abbreviations of a model's name, so some regex voodoo was able to recover about a third of the model-to-file associations, but for the rest, pfft, I got to spend last night hand-matching data file model IDs to sprite names.
I'm down to about 250 entities which aren't associated with a sprite, and I built a three page webapp to speed up building these associations, so it could be worse but it's still painful.
Why do this? Because the XML file in the root of the VASSAL module that describes all the sprite-to-model associations isn't actually valid XML; it's some VASSAL-specific nonsense that I can't seem to parse.
So yeah. XML and other people's data can go burn.
The question now before me is whether I do awful things to my XML parser so I can continue consuming Sebastien's data file format, non-ideal as it is, in the hope that as he maintains it I can "just" import changes; or whether, for my own sanity, I refactor these files and walk away from any future work of Sebastien's in the interest of quality and sanity of parsed results.
End Of Rant
Olympus Air Review: The Future of iPhone Cameras? (Wall Street Journal).
Ted: “Finally checking in with my first bit of news I can show! I’ll soon have news for the MFT DEC PRO too.
Could we get a quick news/blog posting on our new monitor? Was just released today. I’m certain there will be a lot of positive feedback and viewership from this announcement. I’ll send you a model if you’d like to take a look at it.
The products are the VS-1 and VS-2 FineHD. They’re basically 1920 x 1200 resolution versions of our very popular 7-inch monitors. The VS-2 comes with additional monitoring functions like histogram, false color and volume bar. This is higher resolution than monitors five times the price.
VS-1 FineHD: http://aputure.com/vs-1-FineHD
VS-2 FineHD: http://aputure.com/vs-2-FineHD”
Dslava: “Commercial for KM Novosibirsk. In this project I was DOP and the operator on some shots. Shot with the GH4, DJI Ronin, a set of Samyang cine lenses and a Nikkor 50mm 1.2 AIS, Benro tripod, crane, 3x Dedolight Felloni. Cut in Premiere, color grading in Magic Bullet and DaVinci. https://youtu.be/szTiJLZgRLY”
Ivan Mazza: “Street reportage in Dublin with the GH4 https://youtu.be/FbcaS9ZyOTc”
Moro Tere: “GH4 4K at the World Wingsuit League China. Lenses: Sigma 18-35 & Tokina 11-16 with Metabones Speed Booster + Panasonic 35-100. https://www.youtube.com/watch?v=t5TlUM8_jbg”
I went from indignation to disgust.
On the day Barbosa took office, Guimarães defended more spending and credit. In his view, the Treasury should lend more to the states and let the debt grow.
The E-M5II has been selected twice in Imaging Resource’s best cameras of 2015 awards:
Once as a “Camera of Distinction”:
Overall, the Olympus E-M5 II takes what we loved about the E-M5 and polishes it to perfection. Combined with the ever-growing lineup of fantastic Olympus lenses, the E-M5 Mark II is a top-notch system camera for the enthusiast photographer.
And once for a “Technology of Distinction”:
We’ve tested the E-M5 II’s 40-megapixel mode, and the results are very impressive. While medium-format cameras with high resolution sensors will still win on the resolution front, the level of detail the E-M5 II can deliver in its 40 megapixel mode with a good lens (and Olympus makes many such) is truly impressive. When you consider the price point, it’s a tough act to follow.
The GH4, GX8 and E-M5II are among National Geographic’s Top 10 Compact Cameras for Travelers. These are the cameras they mention and why they are so good:
Olympus OM-D E-M5 Mark II
Pick for Travelers: This is a new version of an Olympus OM camera that dates back to the 1970s. Here, Olympus revisits its “smaller is better” philosophy and packs in the latest high-tech features. If you like technical features, particularly when shooting cities at twilight, you’ll like the multishot 40-megapixel mode. Like the Fuji, a complete setup fits in a smaller bag.
Pro Tip: Try the articulated screen with touch-screen focusing for video and stills. The option to touch-focus and shoot is set right on the screen. This works particularly well when shooting from a low angle. After framing up the shot, you can wait for someone to walk into the frame, touch their image on the screen, and the camera will focus and immediately take the picture. —Jim Richardson, contributing photographer for National Geographic magazine and National Geographic Traveler
Panasonic Lumix DMC-GX8
Pick for Travelers: The Panasonic G series has been a photographer favorite for a few years. The cameras are loved for their small size and excellent image quality, as well as for the huge range of lenses available from Panasonic, Olympus, and Leica. An added advantage is that the micro 4/3 cameras in this series all share a common lens mount and functionality. This is a good choice if you want a higher megapixel count than the Olympus cameras offer.
Pro Tip: David Alan Harvey used an earlier version of this surprisingly tiny camera to capture many of the pictures featured in a National Geographic magazine story on North Carolina’s Outer Banks.
Panasonic Lumix DMC-GH4
Pick for Travelers: If you like a modern-looking, great-shooting still camera, this is a great choice. It has all the advantages of using the micro 4/3 format with a huge selection of lenses that don’t lock you into a particular manufacturer’s camera. But it’s a video-shooting powerhouse. The Lumix DMC-GH4 takes still-camera video into an entirely new class by shooting in ultrahigh definition, a resolution that is commonly referred to as 4K. This camera is the least expensive way to shoot ultra-HD video.
Pro Tip: To see what this camera is capable of, watch “Light of the Yucatan” by Bryan Harvey, an award-winning commercial and documentary director of photography.
Pick for Travelers: Although this camera has the very small point-and-shoot-size sensor, its other attributes more than make up for that slight handicap. It’s pocket-size and completely shockproof, freezeproof, and dustproof, as well as waterproof to 50 feet without a housing. Sometimes the best photos come from the sketchiest circumstances, and you won’t be afraid to bring this camera along—it’s one you don’t have to worry about. Olympus has added a new Tough camera, the TG-860, which incorporates a selfie-friendly, 180-degree flip screen.
Pro Tip: [I took] this camera on my first diving trip to the Great Barrier Reef. I was very impressed by the clarity of the images I got below the surface. All of my dive mates were jealous, especially the one who paid extra for housings and whose pictures weren’t as clear. —Carolyn Fox, former director of digital, Nat Geo Travel