Copy and Paste
Applies to note/chord attributes
Ctrl-C, Ctrl-V work for these
Copied marking is highlighted
Selection changes color when copied
Improved Acoustic Feedback
Trill makes a short trill sound on entry
Copy attributes sounds
Improved Visual Feedback
Status bar notices are animated
Characters are highlighted in Lyric Verses
Directives are made more legible when cursor is on them
For un-metered music
Music can still display in “bars”
Curved Tuplet Brackets
Cadenza on/off uses Cadenza Time, sets smaller note size, editable text
Notes without stems
Multi-line text annotation
Bold, Italic etc now apply to selection
A guard prevents accidental syntax collision
Command Center search now reliable
Standalone Multi-line text with backslash editing
Pasting into measures that precede a time signature change
The recently announced Craft Camera is a modular device that will also be available with MFT mount. Here are two videos showing a bit more about how this works.
On 19 April, the European Commission published a communication on "ICT Standardisation Priorities for the Digital Single Market" (hereinafter 'the Communication'). The Digital Single Market (DSM) strategy intends to digitise industries through several legislative and political initiatives, and the Communication is the part of it covering standardisation. In general, the Free Software Foundation Europe (FSFE) welcomes the Communication's approach to integrating Free Software and Open Standards into standardisation, but expresses its concerns about the lack of understanding of the prerequisites necessary to pursue that direction.

Acknowledging the importance of Free Software
The Communication starts with acknowledging the importance of Open Standards for interoperability, innovation and access to media, cultural and educational content, and promotes "community building, attracting new sectors, promoting open standards and platforms where needed, strengthening the link between research and standardisation". The latter is closely linked to the "cloud", where the Communication states that the "proprietary solutions, purely national approaches and standards that limit interoperability can severely hamper the potential of the Digital Single Market", and highlights that "common open standards will help users access new innovative services".
As a result, the Commission concludes that by the end of 2016 it intends to make more use of Free Software elements by better integrating Free Software communities into standard setting processes in the standards developing organisations.
In the Internet of Things (IoT) domain, the Communication acknowledges the EU need for "an open platform approach that supports multiple application domains ... to create competitive IoT ecosystems". In this regard, the Commission states that "this requires open standards that support the entire value chain, integrating multiple technologies ... based on streamlined international cooperation that build on an IPR ["intellectual property rights"] framework enabling easy and fair access to standard essential patents (SEPs)".
FSFE welcomes this direction taken in the Communication, as well as Commissioner Günther Oettinger's position, highlighted in his keynote at Net Futures 2016, that "easy reuse of standard and open components accelerates digitisation of any business or any industry sector." Furthermore, according to Commissioner Oettinger, Free Software standards "enable transparency and build trust."

EC putting good efforts at risk
However, the Commission's attempts to promote Open Standards and a more balanced approach towards "intellectual property rights" policies in standardisation may be seriously hampered by its stance towards FRAND licensing. In particular, the Commission sets the goal to "clarify core elements of an equitable, effective and enforceable licensing methodology around FRAND principles", which it sees as striking the right balance in standardisation and ensuring "fair and non-discriminatory" access to standards. Yet FRAND licensing terms, which in theory stand for "fair, reasonable, and non-discriminatory" terms, are in practice incompatible with most Free Software.
In conclusion, whilst the Communication sets a positive direction towards the promotion of Open Standards and the inclusion of Free Software communities in standardisation, this direction may be seriously limited if the Commission fails to acknowledge the incompatibility of FRAND licensing terms with Free Software licences. This in turn can in practice make a proper Free Software implementation of a standard impossible. As a result, the Commission's attempt to achieve a truly "digital single market" based on interoperability, openness and innovation will fail, as a significant part of the innovative potential found in Free Software will in practice be excluded from standardisation.
In line with our recommendations on the DSM initiative, which were well received by the Commission, FSFE believes that in order to achieve adequate integration of Free Software communities, and an overall sound approach to the appropriate use of Open Standards, the Commission needs to avoid the harmful consequences of FRAND licensing for Free Software, and instead pursue the promotion of standards that are open, minimalistic and implementable with Free Software. Such standards will give substance to the Commission's promise to encourage Free Software communities to participate in standardisation.
There is a $2,000 price drop on the AG-AF100 superkit sold by BHphoto.
No Creative Writing Workshop has the power to transform a student, as if by magic, into a writer. No Creative Writing Workshop can grant a student, in a few weeks, a unique capacity for self-expression. Put another way, no course can suddenly admit a student into some enlightened category of human beings.
I always make a point of stressing these impossibilities, especially when I talk about the work I do in my own Creative Writing Workshop, and when I open enrollment for new classes, which will happen next February 25, a Thursday, at 9 p.m., during the online talk "Why be a writer?".
But if the miracles mentioned in the first paragraph don't happen, why should anyone who wants to be a writer take a Creative Writing Workshop? What is it for? Below, I present 5 reasons:
We are always sharing stories, even if we do so, most of the time, unconsciously. We are storytellers; we are constantly composing narratives and passing them on to our family and friends.
In my Creative Writing Workshop I take the student beyond this observation, so that, once conscious of this ability, they understand how, in literature, these narratives can be charged with tension, humor, irony, and drama.
This work of teaching how a narrative can become more complex is not just a matter of techniques for stimulating the imagination; it also involves reflecting on the human condition, questioning oneself, and looking at reality with fresh eyes.
The ability to tell stories must be turned into a conscious practice.
Turning the ability to tell stories into a conscious practice therefore demands deeper self-awareness, but it also demands greater precision in the use of language, as well as the study of the elements that make up a good story.
By getting to know each of these elements, through contact with fundamental works of literature, and analyzing how important writers worked, the student awakens to the need for a more rigorous language, which is also a way of clarifying thought.
This, incidentally, reminds us of the meaning of the word "aptitude": not merely an innate disposition, but an ability that, in literature, is refined as we study and exercise our command of language.
Looking at reality with fresh eyes and expanding one's self-awareness also leads the student to a deeper understanding of others, of their fellow human beings. Without it, it is impossible to build convincing narrators and characters.
The writer needs to know who narrates the story they want to tell and who lives it: what are the values, prejudices, contradictions, and feelings of the narrator and the characters?
In this way, I try to lead the student to a new form of empathy, through which they can experience and analyze events from different perspectives, as if carrying different "selves" within.
When we study the elements above not from theories but from reading fundamental texts, the student understands how diverse literary styles express states of mind or personal characteristics that may or may not be alike.
Class after class, the student is challenged by these great authors: challenged, by getting to know each of them, to create their own style, their own voice.
It is a relearning of reading, a re-education of attention, an indispensable immersion for perceiving, in the text and in reality, the details that almost always escape us.
Finally, it is essential to know that writing demands discipline and a methodical attitude. As with everything in life, if we don't learn to persevere, we don't develop. One must be aware that writing is not easy, and that aptitude or talent is useless without determination.
These 5 reasons sum up the work I do in my Creative Writing Workshop. But you can also read the testimonials of some of my students.
IO FU' GIÀ QUEL CHE VOI SETE, E QUEL CH'I’ SON VOI ANCO SARETE ("I once was what you are, and what I am you too shall be")
For the Venezuelan Ricardo Hausmann, this is no time to sit on the fence: the country needs a credible plan (and that is unlikely to be possible while Nicolás Maduro is in power)
The present self reassesses the past self's prediction.
In the last 30 days I approached 15 acquaintances, advertising job openings and/or asking for referrals. Of the 15, 5 I already knew had left the country, and another 5 I found out are leaving or planning to leave.
And it's barely been a year.
Now that the dollar has, for practical purposes, passed R$3 and a recession is approaching, working abroad has become very attractive again.
They were already coming here to hunt for talent; now that a salary on the order of 100 thousand dollars a year can leave a good nest egg in reais, let's see which (good) people will want to stay in Brazil.
Acquiring (true) tech talent: what was already hard is going to get worse.
Can you use a magnifying glass and moonlight to light a fire?
At first, this sounds like a pretty easy question.
A magnifying glass concentrates light on a small spot. As many mischievous kids can tell you, a magnifying glass as small as a square inch in size can collect enough light to start a fire. A little Googling will tell you that the Sun is 400,000 times brighter than the Moon, so all we need is a 400,000-square-inch magnifying glass. Right?
Wrong. Here's the real answer: You can't start a fire with moonlight (pretty sure this is a Bon Jovi song) no matter how big your magnifying glass is. The reason is kind of subtle. It involves a lot of arguments that sound wrong but aren't, and generally takes you down a rabbit hole of optics.
First, here's a general rule of thumb: You can't use lenses and mirrors to make something hotter than the surface of the light source itself. In other words, you can't use sunlight to make something hotter than the surface of the Sun.
There are lots of ways to show why this is true using optics, but a simpler—if perhaps less satisfying—argument comes from thermodynamics:
Lenses and mirrors work for free; they don't take any energy to operate. (And, more specifically, everything they do is fully reversible—which means you can add them in without increasing the entropy of the system.) If you could use lenses and mirrors to make heat flow from the Sun to a spot on the ground that's hotter than the Sun, you'd be making heat flow from a colder place to a hotter place without expending energy. The second law of thermodynamics says you can't do that. If you could, you could make a perpetual motion machine.
The Sun is about 5,000°C, so our rule says you can't focus sunlight with lenses and mirrors to get something any hotter than 5,000°C. The Moon's sunlit surface is a little over 100°C, so you can't focus moonlight to make something hotter than about 100°C. That's too cold to set most things on fire.
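A quick numerical sanity check on that temperature cap, with assumed round values that are not from the text (a sunlit lunar surface of ~390 K and a rough autoignition temperature for paper):

```python
# Minimal sketch: the thermodynamic rule caps any passive optical
# system at the source's surface temperature. Assumed round values
# (not from the article): sunlit lunar surface ~390 K; paper's
# autoignition point ~218 degrees C.
MOON_SURFACE_K = 390.0
PAPER_IGNITION_C = 218.0

# Best case for a moonlight-focusing lens system:
max_target_c = MOON_SURFACE_K - 273.15

print(max_target_c)                      # ~117 degrees C
print(max_target_c < PAPER_IGNITION_C)   # True: too cold to ignite paper
```

However big the lens, the focus can never get hotter than that ~117 °C ceiling, which sits well below what common materials need to catch fire.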
"But wait," you might say. "The Moon's light isn't like the Sun's! The Sun is a blackbody—its light output is related to its high temperature. The Moon shines with reflected sunlight, which has a "temperature" of thousands of degrees—that argument doesn't work!"
It turns out it does work, for reasons we'll get to later. But first, hang on—is that rule even correct for the Sun? Sure, the thermodynamics argument seems hard to argue with (because it's correct), but to someone with a physics background who's used to thinking of energy flow, it may seem hard to swallow. Why can't you concentrate lots of sunlight onto a point to make it hot? Lenses can concentrate light down to a tiny point, right? Why can't you just concentrate more and more of the Sun's energy down onto the same point? With over 10^26 watts available, you should be able to get a point as hot as you want, right?
Except lenses don't concentrate light down onto a point—not unless the light source is also a point. They concentrate light down onto an area—a tiny image of the Sun (or a big one!). This difference turns out to be important. To see why, let's look at an example:
This lens directs all the light from point A to point C. If the lens were to concentrate light from the Sun down to a point, it would need to direct all the light from point B to point C, too:
But now we have a problem. What happens if light goes back from point C toward the lens? Optical systems are reversible, so the light should be able to go back to where it came from—but how does the lens know whether the light came from B or from A?
In general, there's no way to "overlay" light beams on each other, because the whole system has to be reversible. This keeps you from squeezing more light in from a given direction, which puts a limit on how much light you can direct from a source to a target.
Maybe you can't overlay light rays, but can't you, you know, sort of smoosh them closer together, so you can fit more of them side-by-side? Then you could gather lots of smooshed beams and aim them at a target from slightly different angles.
Nope, you can't do this.We already know this, of course, since earlier we said that it would let you violate the second law of thermodynamics.
It turns out that any optical system follows a law called conservation of étendue. This law says that if you have light coming into a system from a bunch of different angles and over a large "input" area, then the input area times the input angle (note to nitpickers: in 3D systems, this is technically the solid angle, the 2D equivalent of the regular angle, but whatever) equals the output area times the output angle. If your light is concentrated to a smaller output area, then it must be "spread out" over a larger output angle.
In other words, you can't smoosh light beams together without also making them less parallel, which means you can't aim them at a faraway spot.
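The étendue limit can be put in numbers. This is a rough sketch with assumed textbook values, not figures from the article: the solar constant, the Sun's angular radius, and the standard 1/sin²θ bound on concentration in air are all my own inputs. The punchline is that the maximum focal-spot temperature comes out at roughly the Sun's own surface temperature, exactly as the thermodynamic rule demands:

```python
import math

# Assumed approximate constants (not from the article):
SIGMA = 5.67e-8               # Stefan-Boltzmann constant, W/m^2/K^4
SOLAR_IRRADIANCE = 1361.0     # W/m^2 at Earth, above the atmosphere
SUN_ANGULAR_RADIUS = 0.00465  # radians, as seen from Earth

# Conservation of étendue caps the concentration ratio of any passive
# optical system (in air) at 1/sin^2(theta):
c_max = 1.0 / math.sin(SUN_ANGULAR_RADIUS) ** 2

# Peak irradiance at the focus, and the blackbody temperature that
# irradiance can sustain:
peak_irradiance = SOLAR_IRRADIANCE * c_max
t_max_k = (peak_irradiance / SIGMA) ** 0.25

print(round(c_max))    # ~46,000-fold concentration at best
print(round(t_max_k))  # ~5,800 K: roughly the Sun's surface temperature
```

The same calculation run with the Moon's brightness and angular size bottoms out at about the Moon's surface temperature, which is the whole point.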
There's another way to think about this property of lenses: They only make light sources take up more of the sky; they can't make the light from any single spot brighter. (A popular demonstration of this: Try holding up a magnifying glass to a wall. The magnifying glass collects light from many parts of the wall and sends it to your eye, but it doesn't make the wall look brighter.) It can be shown (this is left as an exercise for the reader) that making the light from a given direction brighter would violate the rules of étendue. (My résumé says étendue is my forté.) In other words, all a lens system can do is make every line of sight end on the surface of a light source, which is equivalent to making the light source surround the target.
If you're "surrounded" by the Sun's surface material, then you're effectively floating within the Sun, and will quickly reach the temperature of your surroundings (very hot).
If you're surrounded by the bright surface of the Moon, what temperature will you reach? Well, rocks on the Moon's surface are nearly surrounded by the surface of the Moon, and they reach the temperature of the surface of the Moon (since they are the surface of the Moon). So a lens system focusing moonlight can't really make something hotter than a well-placed rock sitting on the Moon's surface.
Which gives us one last way to prove that you can't start a fire with moonlight: Buzz Aldrin is still alive.
There is some really unexpected news coming from the Bcnranking 2015 Japanese market share report:
I want to remind you that in Japan the mirrorless market is as big as the "classic" DSLR segment. This makes the Olympus +12% jump even more impressive! The big loser is Sony, which didn't release any major sub-$1,000 camera in 2015. It's really good news for Olympus, which has often been written off as "dead" in the past. Now let's hope the PEN-F and E-M1II (Photokina) will help keep the momentum going.
On the other side, Panasonic gave up third place to Canon. And this is also a surprise if you take into account that the Canon EOS-M system is quite a joke in terms of lens offerings. So I wonder what "went wrong" at Panasonic. They certainly released some nice cameras in 2015. So maybe there is a marketing issue rather than a problem with the product range?
Found via Mirrorlessrumors.
Here you have the very first real-world images of the new Olympus PEN-F! And these are the specs we got from our sources:
PEN-F announcement date: 27.01
50 megapixel High Res mode
Made in “honor” of the PEN-F film camera
Two kit lenses: 14-42ez and 17mm f/1.8
Kit prices: 1497-1797 Euro
The camera will be announced next week on January 27. Follow the live blogging of the event here on 43rumors on January 27 at 5-6am London time!
To get notified on all upcoming news and rumors be sure to subscribe on 43rumors here:
RSS feed: http://www.43rumors.com/feed/
Thanks to the source who shared this!
For sources: Sources can send me anonymous info at email@example.com (create a fake gmail account) or via contact form you see on the right sidebar. Thanks!
Rumors classification explained (FT= FourThirds):
FT1=1-20% chance the rumor is correct
FT2=21-40% chance the rumor is correct
FT3=41-60% chance the rumor is correct
FT4=61-80% chance the rumor is correct
FT5=81-99% chance the rumor is correct
Image courtesy: Mirrorlessons
The most difficult thing when using a long telephoto lens is to quickly compose your frame after spotting your flying subject. The first few times I couldn’t track anything because as soon as I looked through the viewfinder, I couldn’t find my subject anymore. All my reference points were lost. One solution is to keep your camera very close to your eye with the lens aimed at the same thing you are looking at. If you see a bird flying, you start following it before moving your eye to the viewfinder. Practice and experience also help.
With the EE-1 you can take pictures while putting some distance between yourself and the camera and keeping an eye on what’s happening around your frame, something an EVF won’t allow you to do. This is also important because you can observe how the birds behave in the air, how they change direction and where they go. Once you know how to use it, it can either help you enhance your tracking abilities (meaning one day you won’t need the EE-1 anymore) or become an inseparable companion for your wildlife photography.
Marcin Dobas also reviewed the new lens and writes:
Once again I was pleased with the image stabilization (it's not easy to keep the camera steady when you are winded after constant running). I definitely appreciated the fact that a lens with a focal length equivalent of 600mm (840mm with the converter) can be quite comfortably held in your hands while cross-country running. That's a big plus. It won't come as a surprise to many readers that I don't particularly enjoy running after deer with a full frame 600 f/4. Whatever you're into, I guess. Yet again I appreciated the small size of the equipment, especially compared to a SLR. However, compared to a 400mm f/4 APS-C, while it is smaller and lighter, the advantage would not be as pronounced as in the case of FF.
First sample images of a M.Zuiko 300mm F4 taken by Photographer Ángel Lazagabaster at NamenColor.
Wasabi Bob has posted some full size photos taken with the prototype Panasonic 100-400mm lens on Flickr.
Preorder links to the two new MFT lenses:
Panasonic-Leica 100-400mm lens at Amazon, Adorama, BHphoto and Panasonic. In EU at WexUK. ParkCameras.
Olympus 300mm f/4.0 PRO lens at Amazon, Adorama, BHphoto, GetOlympus. In EU at Amazon DE. WexUK. ParkCameras.
What irony, a neo-Arminian Zwingli!
Free software is built by a community of hackers and activists who care about freedom. But forces outside that community affect the work done within it, for good or ill. While we at the FSF regularly deal with GNU General Public License (GPL) violators (who we always hope are just community members waiting for a proper introduction), there is another force that can have a substantial effect on user freedom: governmental policy.
Laws, regulations, and government actions can have a lasting impact on users. The GNU GPL is based in copyright but uses its power in a "copyleft" way to actually protect users from the negative impacts of copyright, patents, and proprietary license agreements. While we can sometimes turn a law on its head to make it work for users like this, other times we are forced to push back in order to guarantee their rights. In order to achieve our global mission of promoting computer user freedom and defending the rights of software users everywhere, we must often take action to petition and protest governing bodies and their regulations. For the Licensing and Compliance Lab this is particularly relevant to our work, as these rules can affect how the licenses published by the FSF protect users.

2015 was a year filled with such actions, and 2016 will see much of the same. While our work this past year often involved issues with the US government, the scope of our work is global. As our worldwide actions on the Trans-Pacific Partnership (TPP) and other international agreements demonstrate, bad laws in the US have a tendency to spread around the globe. We work to educate the US public about problematic laws and regulations here, and we also work with supporters and partner organizations in countries around the world to achieve the same goals in their countries.
We want to take a moment to look back on the work we've done on the licensing team pushing for policies that protect users, and fighting to stop laws and regulations that would harm them.
As we explain on our international trade issue page: "The FSF has been warning users of the dangers of the Trans-Pacific Partnership (TPP) for many years now. The TPP is an agreement negotiated in secret nominally for the promotion of trade, yet entire chapters of it are dedicated to implementing restrictions and regulations on computing and the Internet."
But the TPP is not the only threat looming. In October, FSF's Donald Robertson gave a talk at SeaGL outlining the threats from the alphabet soup of international "trade" agreements. A widening web of negotiations is criss-crossing the globe seeking to implement many of the same terrible restrictions found in TPP.
But we are of course not alone in our opposition to TPP. We worked together with dozens of other groups during the year. In November, we supported a rally and hackathon put on by our friends at the Electronic Frontier Foundation. They currently have another action helping people to contact Congress in the US, telling them to stop TPP. This year, we will have much more to do in order to stop TPP and many TPP clones in the future.
One of the biggest actions we took in 2015 involved fighting back against the DMCA's anti-circumvention provisions. We explained the issue back in April of 2015:
Every three years, supporters of user rights are forced to go through a Kafkaesque process fighting for exemptions from the anti-circumvention provisions of the DMCA... In short, under the DMCA's rules, everything not permitted is forbidden. Unless we expend time and resources to protect and expand exemptions, users could be threatened with legal consequences for circumventing the digital restrictions management (DRM) on their own devices and software and could face criminal penalties for sharing tools that allow others to do the same. Exemptions don't fix the harm brought about by the DMCA's anti-circumvention provisions, but they're the only crumbs Congress deigned to throw us when they tossed out our rights as users.
In the year's round of exemption proposals, we called for the repeal of these provisions and supported every proposed exemption. We called out the companies, organizations and government agencies that tried to lock users down by opposing these exemptions. When the Copyright Office failed to grant all proposed exemptions, we explained how the process was broken and called again for the repeal of the onerous law.
On this front, we had some success, as Congress and the Copyright Office are starting to listen. 2015 ended with the Copyright Office asking for public comments about the DMCA's anti-circumvention provisions and the exemptions process, noting many of the criticisms we levied throughout the year. In 2016, the fight continues. We'll need your help to end the nightmare of these restrictions and their broken exemption process, rather than simply patch over the problems they create.
Unfortunately, the DMCA isn't the only government policy seeking to lock down devices and restrict the ability of users to control their own computing. In 2015, the US Federal Communications Commission (FCC) announced the proposal of new rules requiring manufacturers to implement locks on all wireless devices. The FCC is charged with divvying up wireless spectrum in the US, and works to enforce regulations ensuring that devices do not exceed their mandated spectrum. But in trying to achieve that goal, they proposed rules that would in practice encourage device manufacturers to cripple their wireless-enabled hardware so that users could no longer install free software on those devices.
So the FSF and our allies fought back, starting a campaign to Save WiFi. The coalition came together and filed over 3,000 public comments in opposition to the rules. FSF licensing and compliance manager Joshua Gay and executive director John Sullivan even met with the FCC to make free software concerns heard. The work to protect WiFi continues in 2016.
Not every issue we confront in this arena is a threat to user freedom. Government policy can also work to help support free software, as we are seeing with the US Department of Education's recent push to upgrade the rules around grant-funded educational works. In October of 2015, the Department of Education called for comments on its proposed regulations, which were intended to create greater access and sharing by requiring grant-funded works to be under a free license. There was just one hitch — the regulations as proposed didn't quite get the job done, because they didn't explicitly require the freedom for downstream users to redistribute modified copies of the works. So we rallied users and free software activists to provide feedback to the Department of Education on the new rules. While no decision has yet been announced, we're excited about this new policy and our ability to help shape it to ensure that user freedom is enjoyed by all.
While 2015 was a big year in working to improve government policy, much still needs to be done in the year ahead. The fight to stop TPP still goes on, and other "trade" agreements loom on the horizon. For the DMCA, our voice was heard in 2015, but now we need to actually bring about the necessary changes. The FCC-instigated lockdown of wireless devices still hangs over our head. We will continue to fight for the rights of users on these issues, and any new ones that spring up.
But as our work in 2015 shows, we can't do it alone. We need the help of other organizations and activists to keep up the fight. And we need you as well. Our actions would mean nothing without your voice joining in to amplify and spread the message.
In addition to supporting our actions and making your voice heard, you can help fund the work we do to amplify your concerns. Can you support this important work by making a donation to the Free Software Foundation? You can make a long-term commitment to help the FSF sustain and grow the program for years to come by becoming an associate member for as little as $10/month (student memberships are further discounted). Membership offers many great benefits, too. Other ways you can help:
What if all of the Sun's output of visible light were bundled up into a laser-like beam that had a diameter of around 1m once it reached Earth?
Here's the situation Max is describing:
If you were standing in the path of the beam, you would obviously die pretty quickly. You wouldn't really die of anything, in the traditional sense. You would just stop being biology and start being physics.
When the beam of light hit the atmosphere, it would heat a pocket of air to millions of degrees (Fahrenheit, Celsius, Rankine, or Kelvin—it doesn't really matter) in a fraction of a second. That air would turn to plasma and start dumping its heat as a flood of x-rays in all directions. Those x-rays would heat up the air around them, which would turn to plasma itself and start emitting infrared light. It would be like a hydrogen bomb going off, only much more violent.
This radiation would vaporize everything in sight, turn the surrounding atmosphere to plasma, and start stripping away the Earth's surface.
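To get a feel for the scale, here is a back-of-the-envelope sketch. The numbers are my own assumed round inputs, not from the text: the Sun's total luminosity, a ~40% visible fraction of that output, and the Tsar Bomba's yield as a unit of destruction:

```python
import math

# Assumed round numbers (not from the article):
SUN_LUMINOSITY = 3.8e26     # W, total solar output
VISIBLE_FRACTION = 0.4      # roughly 40% of the Sun's output is visible light
BEAM_DIAMETER = 1.0         # m, per the question
TSAR_BOMBA_YIELD = 2.1e17   # J, the largest bomb ever detonated

beam_power = SUN_LUMINOSITY * VISIBLE_FRACTION   # W carried by the beam
beam_area = math.pi * (BEAM_DIAMETER / 2) ** 2   # m^2 cross-section
intensity = beam_power / beam_area               # W/m^2 at the target

# How often does the beam deliver one Tsar Bomba's worth of energy?
seconds_per_bomb = TSAR_BOMBA_YIELD / beam_power

print(f"{intensity:.1e}")         # ~2e26 W/m^2
print(f"{seconds_per_bomb:.1e}")  # ~1e-9 s: a Tsar Bomba every nanosecond
```

Under these assumptions the beam dumps the energy of history's largest nuclear weapon into a one-meter spot roughly every nanosecond, which is why "much more violent than a hydrogen bomb" is, if anything, an understatement.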
But let's imagine you were standing on the far side of the Earth. You're still definitely not going to make it—things don't turn out well for the Earth in this scenario—but what, exactly, would you die from?
The Earth is big enough to protect people on the other side—at least for a little bit—from Max's sunbeam, and the seismic waves from the destruction would take a while to propagate through the planet. But the Earth isn't a perfect shield. Those wouldn't be what killed you.
Instead, you would die from twilight.
The sky is dark at night because the Sun is on the other side of the Earth. But the night sky isn't always completely dark. There's a glow in the sky before sunrise and after sunset because, even with the Sun hidden, some of the light is bent around the surface by the atmosphere.
If the sunbeam hit the Earth, x-rays, thermal radiation, and everything in between would flood into the atmosphere, so we need to learn a little about how different kinds of light interact with air.
Normal light interacts with the atmosphere through Rayleigh scattering. You may have heard of Rayleigh scattering as the answer to "why is the sky blue." This is sort of true, but honestly, a better answer to this question might be "because air is blue." Sure, it appears blue for a bunch of physics reasons, but everything appears the color it is for a bunch of physics reasons. (When you ask, "Why is the Statue of Liberty green?" the answer is something like, "The outside of the statue is copper, so it used to be copper-colored. Over time, a layer of copper carbonate formed through oxidation, and copper carbonate is green." You don't say, "The statue is green because of frequency-specific absorption and scattering by surface molecules.")
When air heats up, the electrons are stripped away from their atoms, turning it to plasma. The ongoing flood of radiation from the beam has to pass through this plasma, so we need to know how transparent plasma is to different kinds of light. At this point, I'd like to mention the 1964 paper Opacity Calculations: Past and Future, by Harris L. Mayer, which contains the single best opening paragraph to a physics paper I've ever seen:
Initial steps for this symposium began a few billion years ago. As soon as the stars were formed, opacities became one of the basic subjects determining the structure of the physical world in which we live. And more recently with the development of nuclear weapons operating at temperatures of stellar interiors, opacities become as well one of the basic subjects determining the processes by which we may all die.
Compared to air, the plasma is relatively transparent to x-rays. The x-rays would pass through the plasma, heating it through effects called Compton scattering and pair production, but would be stopped quickly when they reached the non-plasma air outside the bubble. However, the steady flow of x-rays from the growing pocket of superhot air closer to the beam would turn a steadily-growing bubble of air to plasma. The fresh plasma at the edge of the bubble would give off infrared radiation, which would head out toward the horizon (along with the infrared already on the way), heating whatever it finds there.
This bubble of heat and light would wrap around the Earth, heating the air and land as it went. As the air heated up, the scattering and emission from the plasma would cause the effects to propagate farther and farther around the horizon. Furthermore, the atmosphere around the beam's contact point would be blasted into space, where it would reflect the light back down around the horizon.
Exactly how quickly the radiation makes it around the Earth depends on many details of atmospheric scattering, but if the Moon happened to be half-full at the time, it might not even matter.
When Max's device kicked in, the Moon would go out, since the sunlight illuminating it would be captured and funneled into a beam. Slightly after the beam made contact with the atmosphere, the quarter moon would blink out.
When the beam from Max's device hit the Earth's atmosphere, the light from the contact point would illuminate the Moon. Depending on the Moon's position and where you were on the Earth, this reflected moonlight alone could be enough to burn you to death ...
... just as the twilight wrapped around the planet, bringing on one final sunrise.
There's one thing that might prevent the Earth's total destruction. Can Max's mechanism actually track a target? If not, the Earth could be saved by its own orbital motion. If the beam was restricted to aiming at a fixed point in the sky, it would only take the Earth about three minutes to move out of the way. Everyone on the surface would still be cooked, and much of the atmosphere and surface would be lost, but the bulk of the Earth's mass would probably remain as a charred husk.
The Sun's death ray would continue out into space. Years later, if it reached another planetary system, it would be too spread out to vaporize anything outright, but it would likely be bright enough to heat up the surfaces of the planets.
Max's scenario may have doomed Earth, but if it's any consolation, we wouldn't necessarily die alone.
So you have a multi-tenant SaaS application that uses PostgreSQL as its database of choice. Since you are serving multiple customers, how do you protect each customer’s data? How do you provide full data isolation (logical and physical) between customers? How do you minimize the impact of attack vectors such as SQL injection? And how do you retain the flexibility to move a customer to a higher hosting tier or higher SLA?
Instead of putting every customer’s data in one database, simply create one database per customer. This allows for physical isolation of data within your Postgres cluster. So, for every new customer that registers, do this as part of the workflow:
CREATE DATABASE customer_A WITH TEMPLATE customer_template_v1;
In the example above, customer_template_v1 is a custom database template with all the tables, schemas, and procedures pre-created.
Note: You can use schemas or Row Level Security (v9.5) to achieve isolation. However, schemas and Row Level Security only provide logical isolation. You could go to the other extreme and use a DB cluster (as opposed to a database) per customer for complete data isolation, but the management overhead makes that a less than ideal option in most cases.
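A word of caution when automating this provisioning workflow: database names cannot be passed as bind parameters, so a sign-up-driven CREATE DATABASE must validate the tenant name before splicing it into DDL. Here is a minimal sketch; the function name, the naming convention, and the error type are illustrative, not from any library:

```python
import re

class InvalidTenantName(ValueError):
    """Raised when a sign-up name can't safely become an identifier."""

def tenant_db_statements(customer, template="customer_template_v1"):
    """Build the provisioning SQL for a new tenant.

    Identifiers can't be bound as query parameters, so the customer
    name is validated strictly before being embedded in the DDL. This
    keeps a hostile sign-up name from injecting SQL into the workflow.
    """
    if not re.fullmatch(r"[a-z][a-z0-9_]{0,30}", customer):
        raise InvalidTenantName(customer)
    return [f"CREATE DATABASE customer_{customer} WITH TEMPLATE {template};"]

# A well-formed name produces the DDL; a hostile one is rejected.
print(tenant_db_statements("acme")[0])
try:
    tenant_db_statements("acme; DROP DATABASE postgres")
except InvalidTenantName:
    print("rejected")
```

An allow-list regex is deliberately stricter than what PostgreSQL identifiers permit; rejecting odd-but-legal names is a cheap price for never quoting untrusted input.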
After the database is created as above, create a unique database user as well. This user would have permission to one (and only one) database:
CREATE ROLE customer_A_user WITH NOSUPERUSER NOCREATEDB LOGIN ENCRYPTED PASSWORD '';
REVOKE ALL ON DATABASE customer_A FROM PUBLIC;
GRANT CONNECT ON DATABASE customer_A TO customer_A_user;
GRANT ALL ON SCHEMA public TO customer_A_user WITH GRANT OPTION;
Now, in your middleware code, make sure to connect to the customer_A database only as customer_A_user. In other words, when a user from the customer_A organization logs into your SaaS application, use the matching database and database user name.
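The per-tenant routing described above can be sketched as follows. The host name, the in-memory secret store, and the DSN shape are all assumptions for illustration; in practice the passwords would come from a vault or secret manager:

```python
# Hypothetical per-tenant connection routing: the tenant determines
# both the database and the role before any query is issued.

SECRETS = {  # assumption: stand-in for a vault/secret manager
    "customer_A": "s3cret-A",
    "customer_B": "s3cret-B",
}

def dsn_for(customer, host="db.internal", port=5432):
    """Return connection parameters for one tenant.

    Each tenant gets its own database *and* its own role, so a leaked
    connection (or a SQL injection) can only ever reach one customer.
    """
    name = f"customer_{customer}"
    return {
        "host": host,
        "port": port,
        "dbname": name,
        "user": f"{name}_user",
        "password": SECRETS[name],
    }

params = dsn_for("A")
print(params["dbname"], params["user"])  # customer_A customer_A_user
```

These parameters can then be handed to whatever driver you use (e.g. as keyword arguments to a psycopg2-style connect call).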
If you wish, you can even create separate READ and WRITE users. To create a read-only user for the database:
CREATE ROLE customer_A_read_user WITH NOSUPERUSER NOCREATEDB LOGIN ENCRYPTED PASSWORD '';
GRANT CONNECT, TEMPORARY ON DATABASE customer_A TO customer_A_read_user;
GRANT USAGE ON SCHEMA public TO customer_A_read_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO customer_A_read_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO customer_A_read_user;
With the above, you have fine-grained control over database access privileges, and every operation in the middleware must decide carefully which role (read or read/write) to use.
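That read-vs-write decision can be centralized in one small routine. This is a sketch under stated assumptions: the role-naming convention is carried over from the CREATE ROLE examples above, and classifying a statement by its first keyword is a heuristic (real middleware would tag operations explicitly, since e.g. CTEs start with WITH):

```python
READ_ONLY_OPS = {"SELECT"}  # assumption: extend as needed

def role_for(customer, sql):
    """Choose the least-privileged role able to run the statement.

    Falls back to the read/write role for anything not recognized as
    read-only, which fails safe in the sense of still working, while
    read-only paths get the weaker role.
    """
    op = sql.lstrip().split(None, 1)[0].upper()
    suffix = "read_user" if op in READ_ONLY_OPS else "user"
    return f"customer_{customer}_{suffix}"

print(role_for("A", "SELECT * FROM invoices"))           # customer_A_read_user
print(role_for("A", "UPDATE invoices SET paid = true"))  # customer_A_user
```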
So, which DB user/role do you use to create the new customer database in the first place? Create a special DB user (say create_db_user) just for this purpose. Audit and monitor this user’s activity closely, and don’t use it for anything else. Alternatively, create a new user for each new database and specify it as the owner at database creation time. Whatever happens, don’t use the Postgres superuser for your web connections!
CREATE ROLE customer_B_user WITH NOSUPERUSER NOCREATEDB NOCREATEROLE LOGIN ENCRYPTED PASSWORD 'ABGF$%##89';
CREATE DATABASE customer_B WITH TEMPLATE customer_template_v1 OWNER=customer_B_user;
As you may have noticed, a number of SaaS applications give vanity URLs (example: https://customerA.example.com) to their customers. Other SaaS applications have a concept of a ‘customerId’ which is a required field for authentication. The benefit is twofold: the tenant is identified up front, and the application knows exactly which database and database user to connect with.
If you are doing any encryption within the database (say with pgcrypto), make sure to use separate encryption keys for each customer. This adds cryptographic isolation between your customers’ data. Finally, when it comes to encryption and key management, avoid the common encryption errors developers keep making.
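One lightweight way to get a distinct key per customer is to derive each one from a single master secret. This is a minimal sketch, assuming a master secret held outside the database (e.g. in a KMS); the names are illustrative:

```python
import hashlib
import hmac

MASTER_SECRET = b"example-master-secret"  # assumption: fetched from a KMS/vault

def customer_key(customer_id):
    """Derive a stable, distinct encryption key for one tenant.

    HMAC-SHA256 over the tenant id acts as a simple key-derivation
    step: leaking one tenant's key reveals nothing about another's,
    and the master secret itself never needs to reach the database.
    """
    return hmac.new(MASTER_SECRET, customer_id.encode(), hashlib.sha256).digest()

k_a = customer_key("customer_A")
k_b = customer_key("customer_B")
assert k_a != k_b                          # cryptographic isolation
assert k_a == customer_key("customer_A")   # deterministic per tenant
print(len(k_a))  # 32 bytes
```

The derived bytes could then serve, for example, as the symmetric passphrase material handed to pgcrypto functions, though a purpose-built KDF such as HKDF is preferable in production.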
Comment and do let us know what other best practices make sense for multi-tenant SaaS access with PostgreSQL.
As such, there’s really no “standard” benchmark that will inform you about the best technology to use for your application. Only your requirements, your data, and your infrastructure can tell you what you need to know.
NoSQL is everywhere and we can't escape it (although I can't say we want to escape). Let's leave the question of why outside this text and just note one thing: this trend isn't limited to new or existing NoSQL solutions. It has another side, namely schema-less data support in traditional relational databases. It's amazing how many possibilities hide at the edge between the relational model and everything else. But of course there is a balance you should find for your specific data. That isn't easy, first of all because it requires comparing incomparable things, e.g. the performance of a NoSQL solution and a traditional database. In this post I'll make such an attempt and compare jsonb in PostgreSQL, json in MySQL, and bson in MongoDB.
jsonb, with slightly extended support in the upcoming PostgreSQL 9.5 release, and several other examples (I'll talk about them later). Of course, these data types are supposed to be binary, which means great performance. Base functionality is equal across the implementations, because it's just the obvious CRUD. And what is the oldest, almost primal desire in this situation? Right: performance benchmarks! PostgreSQL and MySQL were chosen because they have quite similar implementations of json support, and MongoDB as a veteran of NoSQL. The EnterpriseDB research is slightly outdated, but we can use it as a first step on the road of a thousand li. The final goal is not to measure performance in an artificial environment, but to give a neutral evaluation and to get feedback.
pg_nosql_benchmark from EnterpriseDB suggests an obvious approach: first, the required number of records is generated using different kinds of data with some random fluctuations. This data is saved into the database, and then several kinds of queries are performed over it. pg_nosql_benchmark doesn't have any functionality for MySQL, so I had to implement it along the lines of the PostgreSQL version.
There is only one tricky thing with MySQL: it doesn't support json indexing directly, so you must create virtual columns and build indexes on them.
Speaking of details, there was one strange thing in pg_nosql_benchmark. I discovered that a few types of generated records exceeded the 4096-byte limit of the mongo shell, which meant those records were simply dropped. As a dirty hack, we can perform the inserts from a js file instead (and, by the way, that file must be split into a series of chunks smaller than 2GB).
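The chunking step above can be sketched as follows. The 2GB figure comes from the text; the function name and the in-memory representation of the statements are assumptions:

```python
def split_into_chunks(statements, max_bytes):
    """Group insert statements into batches whose total size stays
    under max_bytes (the per-file limit the mongo shell tolerated).

    A single statement larger than max_bytes is still emitted alone,
    since the limit applies per file, not per statement.
    """
    chunks, current, size = [], [], 0
    for stmt in statements:
        stmt_size = len(stmt.encode("utf-8"))
        if current and size + stmt_size > max_bytes:
            chunks.append(current)
            current, size = [], 0
        current.append(stmt)
        size += stmt_size
    if current:
        chunks.append(current)
    return chunks

# Toy run with a 10-byte limit instead of 2 GB:
print(split_into_chunks(["aaaa", "bbbb", "cccc"], 10))
# [['aaaa', 'bbbb'], ['cccc']]
```

Each resulting batch would then be written to its own js file and fed to the shell.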
Besides, there is some unnecessary time expense related to the shell client, authentication and so on. To estimate and exclude it, I performed a corresponding number of "no-op" queries against each database (the overhead turned out to be pretty small).
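The correction described here amounts to a simple subtraction; the function and its names are illustrative only:

```python
def corrected_time(measured_s, noop_s):
    """Subtract the client/auth overhead, estimated from "no-op"
    queries, from a measured batch time. Clamp at zero so a noisy
    overhead estimate can't produce a negative duration."""
    return max(measured_s - noop_s, 0.0)

print(corrected_time(12.5, 0.4))  # 12.1
```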
After all modifications above I've performed measurements for the following cases:
Each of them was tested on a separate m4.xlarge Amazon instance running Ubuntu 14.04 x64 with default configurations, and all tests were performed on 1,000,000 records. Per the instructions, don't forget that postgresql-server-dev-9.5 must be installed. All results were saved to a json file and can easily be visualized with matplotlib (see here).
Besides that, there was a concern about durability. To take it into account I tested a few specific configurations (imho some of them are realistic, while others are rather theoretical, because I don't think anyone would use them in production systems):
All charts are presented in seconds (if they relate to query execution time) or MB (if they relate to the size of a relation/index). Thus, for all charts, the smaller value is better.
Update is another difference between my benchmarks and pg_nosql_benchmark. It can be seen that MongoDB is the obvious leader here, mostly, I guess, because of a PostgreSQL and MySQL restriction: to update a single value, you must overwrite the entire field.
As you can guess from the documentation and this answer, writeConcern j:true is the highest possible transaction durability level (on a single server), which should be comparable to the configuration with fsync enabled. I'm not sure about the durability equivalence, but fsync is definitely slower for update operations here.
Performance measurement is a tricky field, especially in this case. Everything described above is not a complete benchmark; it's just a first step toward understanding the current situation. We're now working on ycsb tests for more thorough measurements, and if we get lucky we'll also compare the performance of cluster configurations.
It looks like I'll be taking part in PgConf.Russia this year, so if you're interested in this subject, welcome.