Shared posts

07 Dec 18:15

Surprise

by Reza


07 Dec 18:15

Never Seen Star Wars

If anyone calls you on any weird detail, just say it's from the Jedi Prince book series, which contains so much random incongruous stuff that even most Expanded Universe/Legends fans collectively agreed to forget about it decades ago.
07 Dec 18:02

Let's Colonize Titan - Scientific American Blog Network

by brandizzi

The idea of a human colony on Titan, a moon of Saturn, might sound crazy. Its temperature hovers at nearly 300° below zero Fahrenheit, and its skies rain methane and ethane that flow into hydrocarbon seas. Nevertheless, Titan could be the only place in the solar system where it makes sense to build a permanent, self-sufficient human settlement.

We reached this conclusion after looking at the planets in a new way: ecologically. We considered the habitat that human beings need and searched for those conditions in our celestial neighborhood.

Our colonization scenario, based on science, technology, politics and culture, presents a thought experiment for anyone who wants to think about the species’ distant future.

We expect human nature to stay the same. Human beings of the future will have the same drives and needs we have now. Practically speaking, their home must have abundant energy, livable temperatures and protection from the rigors of space, including cosmic radiation, which new research suggests is unavoidably dangerous for biological beings like us.

Up to now, most researchers have looked at the Moon or Mars as the next step for human habitation. These destinations have the dual advantages of proximity and of not being clearly unrealistic as choices for where we should go. That second characteristic is lacking at the other bodies near us in the inner solar system, Mercury and Venus.

Mercury is too close to the sun, with temperature extremes and other physical conditions that seem hardly survivable. Venus’s atmosphere is poisonous, crushingly heavy and furnace-hot, due to a run-away greenhouse effect. It might be possible to live suspended by balloons high in Venus’s atmosphere, but we can’t see how such a habitation would ever be self-sustaining.

But although the Moon and Mars look like comparatively reasonable destinations, they also have a deal-breaking problem. Neither is protected by a magnetosphere or atmosphere. Galactic Cosmic Rays, the energetic particles from distant supernovae, bombard the surfaces of the Moon and Mars, and people can’t live long-term under the assault of GCRs.

The cancer-causing potential of this powerful radiation has long been known, although it remains poorly quantified. But research in the last two years has added a potentially more serious hazard: brain damage. GCRs include particles such as iron nuclei traveling at close to the speed of light that destroy brain tissue.

Exposing mice to this radiation at levels similar to those found in space caused brain damage and loss of cognitive abilities, according to a study published last year by Vipan K. Parihar and colleagues in Science Advances. That research suggests we aren’t ready to send astronauts to Mars for a visit, much less to live there.

On Earth, we are shielded from GCRs by water in the atmosphere. But it takes two meters of water to block half of the GCRs present in unprotected space. Practically, a Moon or Mars settlement would have to be built underground to be safe from this radiation.
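
To put that two-meter figure in perspective, here is a minimal back-of-the-envelope sketch in Python. It assumes simple exponential attenuation with the two-meter half-thickness quoted above; real GCR shielding involves secondary particle showers and a spectrum of particle energies, so treat this as illustration only.

    # Sketch: exponential attenuation of GCRs by water, assuming the
    # 2 m half-thickness quoted in the article. Illustrative only; real
    # shielding physics (secondary particles, energy spectra) is messier.

    def transmitted_fraction(depth_m, half_thickness_m=2.0):
        """Fraction of GCRs that pass through depth_m meters of water."""
        return 0.5 ** (depth_m / half_thickness_m)

    for depth in (2, 4, 6, 10):
        frac = transmitted_fraction(depth)
        print(f"{depth:2d} m of water -> {frac:.1%} of GCRs get through")
    # 2 m -> 50.0%, 4 m -> 25.0%, 6 m -> 12.5%, 10 m -> 3.1%

Even ten meters of overhead water (or the equivalent in rock) still lets a few percent through, which is why settlement designs end up underground.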

Underground shelter is hard to build and not flexible or easy to expand. Settlers would need enormous excavations for room to supply all their needs for food, manufacturing and daily life. We ask why they would go to that trouble. We can live underground on Earth. What’s the advantage to doing so on Mars?

Beyond Mars, the next potential home is among the moons of Jupiter and Saturn. There are dozens of choices among them, but the winner is obvious. Titan is the most Earthlike body other than our original home.

Titan is the only other body in the solar system with liquid on the surface, with its lakes of methane and ethane that look startlingly like water bodies on Earth. It rains methane on Titan, occasionally filling swamps. Dunes of solid hydrocarbons look remarkably like Earth’s sand dunes.

For protection from radiation, Titan has a nitrogen atmosphere 50 percent thicker than Earth’s. Saturn’s magnetosphere also provides shelter. On the surface, vast quantities of hydrocarbons in solid and liquid form lie ready to be used for energy. Although the atmosphere lacks oxygen, water ice just below the surface could be used to provide oxygen for breathing and to combust hydrocarbons as fuel.

It’s cold on Titan, at -180°C (-291°F), but thanks to its thick atmosphere, residents wouldn’t need pressure suits—just warm clothing and respirators. Housing could be made of plastic produced from the unlimited resources harvested on the surface, and could consist of domes inflated by warm oxygen and nitrogen. The ease of construction would allow huge indoor spaces.

Titanians (as we call them) wouldn’t have to spend all their time inside. The recreational opportunities on Titan are unique. For example, you could fly. The weak gravity—similar to the Moon’s—combined with the thick atmosphere would allow individuals to aviate with wings on their backs. If the wings fall off, no worry, landing will be easy. Terminal velocity on Titan is about a tenth of what it is on Earth.

How will we get there? Currently, we can’t. Unfortunately, we probably can’t get to Mars safely, either, without faster propulsion to limit the time in space and associated GCR dosage before astronauts are unduly harmed. We will need faster propulsion to Mars or Titan. For Titan, much faster, as the trip currently takes seven years.

There is no quick way to move off the Earth. We will have to solve our problems here. But if our species continues to invest in the pure science of space exploration and the stretch technology needed to preserve human health in space, people will eventually live on Titan.

Charles Wohlforth and Amanda Hendrix are the authors of Beyond Earth: Our Path to a New Home in the Planets


07 Dec 17:59

Cutting Through The Machine Learning Hype

by brandizzi
Adam Victor Brandizzi

Great introduction/hermeneutics!

Post written by

Jason Black

Jason Black is an investor at RRE Ventures focused on deals in B2B SaaS, data, machine learning, and developer tools.

Let’s punch through the noise around machine learning. (Credit: Shutterstock)

The tech ecosystem is well acquainted with buzzwords. From “Web 2.0” to “cloud computing” to “mobile first” to “on-demand,” it seems as though each passing year heralds the advent and popularization of new catchphrases to which fledgling companies attach themselves. But while the trends these phrases represent are real, and category-defining companies will inevitably give weight to newly coined buzzwords, so too will derivative startups seek to take advantage of concepts that remain ill-defined by experts and little-understood by everyone else.

In a June post, CB Insights encapsulated the frenzy (and absurdity) of the moment: 

“It’s clear that 9 of 10 investors have very little idea what AI is so if you’re a founder raising money, you should sprinkle some AI into your pitch deck. Use of ‘artificial intelligence,’ ‘AI,’ ‘chatbot,’ or ‘bot’ are winners right now and might get you a little valuation bump or get the process to move quicker.

If you want to drive home that you’re all about that AI, use terms like machine learning, neural networks, image recognition, deep learning, and NLP. Then sit back and watch the funding roll in.”

Pitch decks and headlines today are lousy with references to “artificial intelligence” and “machine learning.” But what do those terms really mean? And how can you separate empty claims from real value creation when evaluating businesses and the technologies which underpin them? Having at least a passing knowledge of what you’re talking about is a good first step, so let’s start with the basics.

Definitions

Artificial Intelligence

The terms “artificial intelligence” and “machine learning” are frequently used interchangeably, but doing so introduces imprecision and ambiguity. Artificial intelligence, a term coined at a 1956 Dartmouth College conference, refers to a line of research that seeks to recreate the characteristics possessed by human intelligence.

At the time, “General AI” was thought to be within reach. People believed that specific advancements (like teaching a computer to master checkers or chess) would allow us to learn how machines learn, and ultimately program computers to learn like we do. If we could use machines to mimic the rudimentary way that babies learn about the world, the reasoning went, soon we would have a fully functioning “grown up” artificial intelligence that could master new tasks at a similar or faster rate.

In hindsight, this was a bit too optimistic.

While the end goal of AI was — and still is — the creation of a sentient machine consciousness, we haven’t yet achieved generalized artificial intelligence. Moreover, barring a major breakthrough in methodology, we don’t have a reasonable timeline for doing so. As a result, research (especially the types of research relevant to the VC and startup world) now focuses on a sub-field of AI known as machine learning, aimed at solving individual tasks that can increase productivity and benefit businesses today.

Machine Learning

In contrast with AI’s stated goal of recreating human intelligence, machine learning tools seek to create predictive models around specific tasks. Simply put, machine learning is all about utility. Nothing too flashy, just supercharged statistics.

While there are plenty of good definitions for machine learning floating around, my favorite is Tom M. Mitchell’s 1997 definition:

“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.”

Rather formal, but this definition is buzzword-free and gets straight to the elegance and simplicity of machine learning. Simply put, a machine is said to learn if its performance at a set of tasks improves as it’s given more data.

Need an example? How about one from your Statistics 101 course: simple linear regression. The goal (or Task) is to draw a “line of best fit” given some initial set of observed data. Through an iterative process that seeks to minimize the average distance between the regression line and the observed data points (its Performance measure), linear regression improves its predictive “line of best fit” with each additional data point (Experience).

Red dots represent scatter plot of all data. The blue line minimizes average distance from the regression line (represented here by grey lines). Credit: Jason Black

Boom. Machine learning.
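
To see Mitchell’s framing in code, here is a minimal sketch (Python with NumPy; the data is synthetic and all names are mine): the Task is predicting y from x with a line, the Performance measure is mean squared error on held-out data, and Experience is the growing set of training points.

    import numpy as np

    # Task T: predict y from x with a line.
    # Performance P: mean squared error on a held-out test set.
    # Experience E: the training points; more data should improve P.

    rng = np.random.default_rng(0)

    def make_data(n):
        x = rng.uniform(0, 10, size=n)
        y = 2.0 * x - 1.0 + rng.normal(0, 1, size=n)  # true line plus noise
        return x, y

    x_test, y_test = make_data(10_000)  # fixed held-out set

    for n in (3, 10, 100, 1000):  # growing Experience
        x_train, y_train = make_data(n)
        slope, intercept = np.polyfit(x_train, y_train, deg=1)
        test_mse = np.mean((y_test - (slope * x_test + intercept)) ** 2)
        print(f"n={n:4d}  y = {slope:.2f}x + {intercept:+.2f}  test MSE = {test_mse:.3f}")

As n grows, the fitted slope and intercept settle toward the true values and the test error falls toward the irreducible noise floor: performance at T, measured by P, improving with E.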

Given that relatively low bar, nearly any tech company can claim to be “leveraging machine learning.” So where do we go from here? To further demystify the topic, it’s also useful to understand how machine learning algorithms are developed. With linear regression, the algorithm in question simply draws a line which gets as close to as many individual data points as possible. But how about a real world example?

While the math behind more sophisticated machine learning models quickly becomes incredibly complex, the underlying concepts are often very intuitive.


Developing A Machine Learning Model

Say you wanted to predict what new songs a particular Spotify user would enjoy. Follow your intuition.

You’d probably start with his or her existing library, expecting that when another user’s library has a large number of songs in common with it, our listener is likely to enjoy the songs in that user’s library that he or she doesn’t yet have (a process called collaborative filtering). You might also analyze the acoustic elements in the user’s library to look for common traits such as an upbeat tempo or use of electric guitar (Spotify uses neural networks to do this, for example). Finally, you might assign an appropriate weight to the tracks a user has listened to repeatedly, starred, or marked with a thumbs up/down.
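
As a toy illustration of the collaborative-filtering intuition (entirely made-up data; Spotify’s production system is of course far more elaborate):

    # Toy collaborative filtering: recommend songs from the libraries of
    # users whose libraries overlap most with the target user's.
    # The users and songs below are invented for illustration.

    libraries = {
        "alice": {"song_a", "song_b", "song_c", "song_d"},
        "bob":   {"song_b", "song_c", "song_d", "song_e"},
        "carol": {"song_x", "song_y", "song_z"},
    }

    def recommend(user, libraries, k=5):
        mine = libraries[user]
        scores = {}
        for other, theirs in libraries.items():
            if other == user:
                continue
            overlap = len(mine & theirs)  # shared songs as a crude similarity
            if overlap == 0:
                continue
            for song in theirs - mine:    # candidate songs the user lacks
                scores[song] = scores.get(song, 0) + overlap
        return sorted(scores, key=scores.get, reverse=True)[:k]

    print(recommend("alice", libraries))  # ['song_e'], via overlap with bob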

Check out this visualization of the filters learned in the first convolutional layer of Spotify’s deep learning algorithm. The time axis is horizontal, the frequency axis is vertical (frequency increases from top to bottom). Credit: Jason Black / Spotify

All that’s left is to translate these intuitions into a mathematical representation that ingests the requisite data sources and outputs a ranked list of songs to present to the user. As the user listens, likes, and dislikes new music, these new data points (or Experience in our earlier terminology) can be fed back into the same models to update, and thus improve, that prediction list.

If you want to learn more about more complex machine learning algorithms, there are ample resources across the web that do a great job of explaining neural networks, deep learning, Bayesian networks, hidden Markov models, and many more modeling systems. But for our purposes, technical implementation is less relevant than understanding how startups create value by harnessing that technology. So let’s keep moving.

Where’s The Value?

Now that we have covered what machine learning is, for what should savvy investors and skeptical readers be on the lookout? In my experience, the initial litmus test is to walk through the three fundamental building blocks of a machine learning model (task T, performance measure P, and experience E) and look for new or interesting approaches. It is these novelties which form the basis of differentiated products and successful startups.

Experience | Unique or Proprietary Data Sets

Without data, you can’t train a machine learning model. Full stop.

With a publicly available training set, you can train a machine learning model to do specified tasks, which is great, but then you are relying on tuning and tweaking the performance of your algorithm to outperform others. If everyone is building machine learning models with the same sets of training data, competitive advantages (at least at the outset) are all but non-existent.

By contrast, a unique and proprietary data set confers an unfair advantage. Only Facebook has access to its Social Graph. Only Uber has access to the pickup/dropoff points of every rider in its network. These are data sets that only one company can use to train its machine learning models. The value of that is obvious. It’s basic scarcity of a private resource. And it can create an enormous moat.

Take *Digital Genius as an example. The company offers customer service automation tools and counts numerous Fortune 500 companies as clients. These relationships offer Digital Genius exclusive access to millions of historical customer service chat logs, which represent millions of appropriate responses to a wide swath of customer queries. Using this data, Digital Genius trains its Natural Language Processing (NLP) algorithms before beginning to interact with new, live customers.

In order to attain the same level of performance, a competitor would have to amass a similar number of chat logs from scratch. Practically speaking, this would require performing millions of live customer interactions, many of which would likely be frustrating and useless for the customers themselves. While the algorithm would eventually learn and improve, the model’s day one performance would be lackluster at best, and the company itself would be unlikely to gain traction in the market. Thus, having the proprietary data sets from their largest clients gives Digital Genius a real, differentiated value proposition in the chat automation space.

Of course, another way to go about gaining access to a unique data set is to capture one that has never existed. The coming wave of IoT and the proliferation of sensors promise to unlock troves of new data sets that have never before been analyzed. Companies which get proprietary access to new data sets, or those which create proprietary data sets themselves, can thus outperform the competition.

*OTTO Motors (a division of Clearpath Robotics), has captured one of the most robust data sets of indoor industrial environments on the planet from their network of autonomous materials transport robots (pictured below). Every time an OTTO robot makes its way around the factory floor, information about its environment — moving forklifts, walking workers, path obstructions — can be sent back to a centralized database. If the company then develops a more robust model to navigate around forklifts, for example, the OTTO Motors team can backtest and debug their improvements against real-world, historical environment data without needing to actually test their robots or even use physical environments.

An OTTO 1500 robot autonomously navigates around a warehouse. (Credit: Otto)

This same data-race is even more competitive on the road. The reason the Google Self-Driving Car, Tesla Autopilot, and Uber Self-Driving teams all tout (or forecast) the number of autonomous miles driven is that each additional mile captures valuable data about changing environments that engineers can then test against as they improve their autonomous navigation algorithms. But relative to the total number of miles driven per year (an estimated 3.15 trillion in the US alone in 2015), only a de minimis fraction is being captured by the three projects mentioned above, leaving greenfield opportunity for startups like Cruise Automation, nuTonomy, and Zoox.

The final, and most experimental, approach to leveraging unique data sets is to programmatically generate data that is then used to train machine learning algorithms. This technique is best suited for creating data sets that are difficult or impossible to collect.

Here’s an example. In order to create a machine learning algorithm to predict the direction a person is looking in a real world environment, you first have to train on sample data that has gaze direction correctly labeled. Given the literal billions of images that we have of people looking, with their eyes open, in different directions in every conceivable environment, you’d think this would be a trivial task. The data set—it would seem—already exists.

The problem is that the data isn’t labeled, and manually labeling (let alone precisely determining) a person’s exact gaze direction from a photograph is far too hard for a human to do with any accuracy in a reasonable amount of time. Despite possessing a vast repository of images, we can’t even create good enough approximations of gaze direction for a machine to train on. We don’t have a complete, labeled set of data.

Programmatically generated eyes used to train machine learning algorithms to determine gaze direction. Credit: Jason Black

In order to tackle this problem, a set of researchers at the University of Cambridge programmatically generated renderings of an artificial eye and coupled each image with its corresponding gaze direction. By generating over 10,000 images in a variety of different lighting conditions, the researchers produced enough labeled data to train a machine learning algorithm (in this case, a neural network) to predict gaze direction in photos of people the machine had not previously encountered. By programmatically generating a labeled data set, the researchers sidestepped the problems inherent in our existing repository of real-world data.
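
In spirit, the pipeline looks something like the sketch below (entirely illustrative: the "renderer" here is a stub that reduces each synthetic eye to two numeric features standing in for rendered pixels, and the model is a simple least-squares fit rather than a neural network).

    import numpy as np

    # Sketch: programmatically generate labeled data, then train on it.
    # Because we choose the gaze angle before "rendering," every sample
    # comes with a perfect label for free.

    rng = np.random.default_rng(42)

    def render_eye(gaze_angle_rad):
        """Stub renderer: noisy features that depend on the gaze angle."""
        iris_offset = np.sin(gaze_angle_rad) + rng.normal(0, 0.05)
        lid_opening = np.cos(gaze_angle_rad) + rng.normal(0, 0.05)
        return np.array([iris_offset, lid_opening])

    angles = rng.uniform(-0.6, 0.6, size=10_000)      # known labels
    X = np.stack([render_eye(a) for a in angles])

    # Fit angle ~ features by least squares (with an intercept column).
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, angles, rcond=None)

    test = render_eye(0.3)                            # unseen "rendering"
    pred = test @ coef[:2] + coef[2]
    print(f"true angle 0.30, predicted {pred:.2f}")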

While the means of finding, collecting, or generating data on which to train machine learning models are varied, evaluating the sources of data a company has access to (especially those which competitors can’t access) is a great starting point when evaluating a startup or its technology. But there’s more to machine learning than just experience.


Task | Differentiated Approaches

Just as access to a unique data set is inherently valuable, developing a new approach to a machine learning Task (T), or starting work on a new or neglected Task, provides an alternative path to creating value.

DeepMind, a company Google acquired for over $500 million in 2014, developed a model generation approach that enabled them to pull ahead of the pack in a branch of machine learning known as deep learning (hence the name). While their acquisition went largely unnoticed by the mainstream press, it was difficult to miss the headlines as their machine learning algorithm dubbed “AlphaGo” squared off against the world champion of Go in early 2016.

The rules of the game of Go are relatively simple, yet the number of possible board positions in the game outnumbers the atoms in the universe. Traditional machine learning techniques by themselves simply could not produce an effective strategy given the number of possible outcomes. However, DeepMind’s differentiated approach to these existing techniques enabled the team not only to best the current world champion of the game, Lee Sedol, but to do so in such a way that spectators described the machine’s performance as “genius” and “beautiful.”
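
The arithmetic behind that claim is easy to check. A naive upper bound treats each of the 361 points on the 19×19 board as empty, black, or white; the count of strictly legal positions is smaller (about 2×10^170) but still dwarfs the commonly cited ~10^80 atoms in the observable universe.

    import math

    # Naive upper bound: 3 states for each of the 361 board points.
    digits = 361 * math.log10(3)
    print(f"3^361 is roughly 10^{digits:.0f}")        # ~10^172

    # Compare with ~10^80 atoms in the observable universe.
    print(f"excess: about 10^{digits - 80:.0f} times more positions")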

However, the sophistication of performance on one Task does not translate well to other domains. If you used the same code from the AlphaGo project to respond to customer service inquiries or to navigate a factory floor, the performance would likely be abysmal. Practically, the approximate 1:1 ratio between Task and machine learning model means that for the short and medium term there are innumerable Tasks for which no machine learning model has yet been trained.

For this reason, identifying neglected Tasks can be quite lucrative and easier than one might expect. One might assume, for example, that since a significant amount of time, effort, and money has been spent on improving photo analysis, video analysis has enjoyed the same performance gains. Not so. While some of the models from static image analysis have carried over, the complexity associated with moving images and audio has discouraged development, especially as plenty of low-hanging fruit in the photo identification space still remains.

*Dextro’s Stream API annotating live Periscope videos in real time. (Credit: Dextro)

This created a great opportunity for *Dextro and Clarifai to quickly pull ahead in applying machine learning to understanding the content of videos. These advances in video analysis now enable video distributors to make videos searchable based not just on the metadata manually submitted by uploaders, but also on the content of the video itself: an automatically generated transcript, an auto-classified categorization label, and even the individual objects or concepts that appear throughout the video.

Performance | Step Function Improvement

The final major value driver for startups seeking to harness machine learning technology is meaningfully outperforming the competition at a known Task.

One great example is Prosper, which makes loans to individuals and SMBs. Their Task is the same as that of any other lender on the market — to accurately evaluate the risk of lending money to a particular individual or business. Given that Prosper and its peers in both the alternative and traditional lending worlds live or die by their ability to predict creditworthiness, Performance (P) is absolutely critical to the success of their business. So how do relatively young alternative lenders outperform even the largest financial institutions out there?

Instead of taking in tens of data points about a particular borrower, Prosper draws on an order of magnitude more. In addition to using this larger and differentiated data set, newcomers like Prosper have been rigorously scouring research papers and doing their own internal development in order to apply bleeding-edge machine learning algorithms to their data. Together, the Performance characteristics of the resulting machine learning models represent a unique and differentiated ability to issue profitable loans to a whole group of consumers who have historically been turned away by legacy institutions.

Being able to judge the performance of a startup’s machine learning models against that of the competition is another great way to single out the most innovative companies and separate them from the mere peddlers of hype and buzz.

Back To Business

To be clear, there’s much more to machine learning than hyped-up pitch decks and empty promises. The trick is separating the wheat from the chaff. Armed with clear definitions and a working knowledge of the simple concepts underlying the buzzwords and headlines, go forth and pick through presentations with confidence!

But remember this caveat.

Yes, machine learning — when harnessed appropriately — is both real and powerful. But the ultimate success or failure of any business hinges much more on the market opportunity, productization, and the team’s ability to sell than it does on specific implementations of machine learning algorithms. Just as compelling tech is a necessary but insufficient condition to create a successful tech company, great tech in the absence of a viable business is unlikely to become anything more than a science project.

Note: Jason Black is an investor at RRE Ventures. The few RRE portfolio companies mentioned in this post are clearly denoted with an *asterisk. You can follow Jason on Twitter here.


07 Dec 17:57

Mentirinhas #1075

by Fábio Coala


Ah, Coala, I don't know what a “side quest” is! – If you worked less and played more video games, you'd know.

The post Mentirinhas #1075 appeared first on Mentirinhas.

07 Dec 17:57

Settling

Of course, "Number of times I've gotten to make a decision twice to know for sure how it would have turned out" is still at 0.
04 Dec 19:52

Ketogenic Diet | Epilepsy Foundation

by brandizzi
Adam Victor Brandizzi

Fighting epilepsy by limiting carbohydrates. That's quite surprising! (via https://twitter.com/luhugerth/status/802110965588357120)

What is the ketogenic diet?

The ketogenic diet is a special high-fat, low-carbohydrate diet that helps to control seizures in some people with epilepsy. It is prescribed by a physician and carefully monitored by a dietitian. It is stricter than the modified Atkins diet, requiring careful measurements of calories, fluids, and proteins.

  • The name ketogenic means that it produces ketones in the body (keto = ketone, genic = producing). Ketones are formed when the body uses fat for its source of energy.
  • Usually the body uses carbohydrates (such as sugar, bread, pasta) for its fuel, but because the ketogenic diet is very low in carbohydrates, fats become the primary fuel instead.
  • Ketones are not dangerous. They can be detected in the urine, blood, and breath. Ketones are one of the more likely mechanisms of action of the diet, with higher ketone levels often leading to improved seizure control. However, there are many other theories for why the diet works.

Who will it help?

  • Doctors usually recommend the ketogenic diet for children whose seizures have not responded to several different seizure medicines. It is particularly recommended for children with the Lennox-Gastaut syndrome.
  • The diet is usually not recommended for adults, mostly because the restricted food choices make it hard to follow. Yet studies of the diet's use in adults show that it seems to work just as well.
  • The ketogenic diet has been shown in small studies (case reports and case series) to be particularly helpful for some epilepsy conditions. These include infantile spasms, Rett syndrome, tuberous sclerosis complex, Dravet syndrome, Doose syndrome, and GLUT-1 deficiency. Using a formula-only ketogenic diet for infants and gastrostomy-tube fed children may lead to better compliance and possibly even improved efficacy.
  • The diet works well for children with focal seizures, but may be less likely to lead to an immediate seizure-free result.
  • In general, the diet can always be considered as long as there are no clear metabolic or mitochondrial reasons not to use it.

What is it like?

  • The typical ketogenic diet, called the "long-chain triglyceride diet," provides 3 to 4 grams of fat for every 1 gram of carbohydrate and protein.
  • The dietitian recommends a daily diet that contains 75 to 100 calories for every kilogram (2.2 pounds) of body weight and 1 to 2 grams of protein for every kilogram of body weight. If this sounds complicated, it is! That's why parents need a dietitian's help.
  • A ketogenic diet “ratio” is the ratio of fat grams to carbohydrate and protein grams combined. A 4:1 ratio (4 grams of fat for every 1 gram of protein and carbohydrate combined) is stricter than a 3:1 ratio, and is typically used for most children. A 3:1 ratio is typically used for infants, adolescents, and children who require higher amounts of protein or carbohydrate for some other reason. (A short worked example of this arithmetic appears after this list.)
  • The kinds of foods that provide fat for the ketogenic diet are butter, heavy whipping cream, mayonnaise, and oils (e.g. canola or olive).
  • Because the amount of carbohydrate and protein in the diet have to be restricted, it is very important to prepare meals carefully.
  • No other sources of carbohydrates can be eaten. (Even toothpaste might have some sugar in it!).
  • The ketogenic diet is supervised by a dietitian who monitors the child's nutrition and can teach parents and the child what can and cannot be eaten.
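
For the curious, the ratio arithmetic mentioned above can be worked through in a few lines (illustration only; actual prescriptions come from the physician and dietitian, not from a formula):

    # Worked example of the ketogenic "ratio" arithmetic.
    # ratio = grams of fat : grams of (protein + carbohydrate) combined.

    FAT_KCAL_PER_G = 9            # calories per gram of fat
    PROTEIN_CARB_KCAL_PER_G = 4   # calories per gram of protein or carb

    def meal_from_ratio(ratio, total_kcal):
        """Grams of fat and of protein+carb for a meal at a given ratio."""
        kcal_per_unit = ratio * FAT_KCAL_PER_G + PROTEIN_CARB_KCAL_PER_G
        units = total_kcal / kcal_per_unit
        return ratio * units, units       # (fat grams, protein+carb grams)

    fat_g, pc_g = meal_from_ratio(4.0, 500)   # a 500-calorie meal at 4:1
    print(f"4:1 meal, 500 kcal: {fat_g:.1f} g fat, {pc_g:.1f} g protein+carb")
    # -> 50.0 g fat, 12.5 g protein + carbohydrate combined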

What happens first?

  • Typically the diet is started in the hospital. The child usually begins by fasting (except for water) under close medical supervision for 24 hours. For instance, the child might go into the hospital on Monday, start fasting at 6 p.m. and continue to have only water until 6 a.m. on Tuesday. The diet is then started, either by slowly increasing the calories or the ratio. This is the typical Hopkins protocol.
  • There is growing evidence that fasting is probably not necessary for long-term efficacy, although it does lead to a quicker onset of ketosis.
  • The primary reason for admission in most centers is to monitor for any increase in seizures on the diet, ensure all medications are carbohydrate-free, and educate the families.

Does it work?

Several studies have shown that the ketogenic diet does reduce or prevent seizures in many children whose seizures could not be controlled by medications.

  • Over half of children who go on the diet have at least a 50% reduction in the number of their seizures.
  • Some children, usually 10-15%, even become seizure-free.

Tell me more

  • Children who are on the ketogenic diet continue to take seizure medicines.
  • Some are able to take smaller doses or fewer medicines than before they started the diet.
  • When medications can be lowered depends on the child and the comfort level of the neurologist. Evidence suggests it can be done safely in some children, even as soon as the diet is started.
  • If the person goes off the diet for even one meal, it may lose its good effect. So it is very important to stick with the diet as prescribed.
  • It can be hard to follow the diet 100%, especially if there are other children at home who are on a normal diet.
  • Small children who have free access to the refrigerator are tempted by "forbidden" foods. Parents need to work as closely as possible with a dietitian.

Are there any side effects?

  • A person starting the ketogenic diet may feel sluggish for a few days after the diet is started. This can worsen if a child is sick at the same time as the diet is started.
  • Make sure to encourage carbohydrate-free fluids during illnesses.
  • Other side effects that might occur if the person stays on the diet for a long time are:
    • Kidney stones
    • High cholesterol levels in the blood
    • Dehydration
    • Constipation
    • Slowed growth or weight gain
    • Bone fractures

Are any other medicine changes needed?

  • Because the diet does not provide all the vitamins and minerals found in a balanced diet, the dietitian will recommend vitamin and mineral supplements. The most important of these are calcium and vitamin D (to prevent thinning of the bones), iron, and folic acid.
  • There are no anticonvulsants that should be stopped while on the diet. Topamax (topiramate) and Zonegran (zonisamide) do not have a higher risk of acidosis or kidney stones while on the diet. Depakote (valproic acid) does not lead to carnitine deficiency or other difficulties while on the diet either.
  • Medication levels do not change while on the diet according to recent studies.

How is the patient monitored over time?

  • Early on, the doctor will usually see the child every 1-3 months.
  • Blood and urine tests are performed to make sure there are no medical problems.
  • The height and weight are measured to see if growth has slowed down.
  • As the child gains weight, the diet may need to be adjusted by the dietitian.

Can the diet ever be stopped?

If seizures have been well controlled for some time, usually 2 years, the doctor might suggest going off the diet.

  • Usually, the patient is gradually taken off the diet over several months or even longer. Seizures may worsen if the ketogenic diet is stopped all at once.
  • Children usually continue to take seizure medicines after they go off the diet.
  • In many situations, the diet has led to significant, but not total, seizure control. Families may choose to remain on the ketogenic diet for many years in these situations.

Where can I find out more information about the diet?

Other than the internet, there are several books about the ketogenic diet available. 

  • One is The Ketogenic Diet: A Treatment for Children and Others with Epilepsy, by Drs. Freeman and Kossoff, which discusses the Johns Hopkins approach and experience. 
  • The Charlie Foundation and Matthew’s Friends are parent-run organizations for support. 


04 Dec 19:51

Enzymes from nine organisms combined to create new pathway to use CO2 | Ars Technica

by brandizzi
Adam Victor Brandizzi

Not very useful right now, but certainly something very interesting!

Yeah, it's nice, and there's a lot of it, but it's horribly inefficient.

There's no question that humans are driving long-term changes in the amount of carbon in the atmosphere. But the human influence is taking place against a backdrop of natural carbon fluxes that are staggering in scale. Each year, for example, the amount of CO2 in the atmosphere cycles up and down by over a percent purely due to seasonal differences in plant growth.

The effectiveness of biological activity provides the hope that we could leverage it to help us pull some of our carbon back out of the atmosphere at an accelerated pace. But the incredible scale of biology hides a bit of an ugly secret: the individual enzymes and pathways that are used to incorporate CO2 into living organisms aren't that efficient. These pathways are also linked to a complex biochemistry inside the cell that doesn't always suit our purposes.

Fed up with waiting for life to evolve a solution to our industrialization problem, a German-Swiss team of researchers has decided to roll its own. In an astonishing bit of work, they've taken enzymes from nine different organisms in all three domains of life and used them to build and optimize a synthetic cycle that can use carbon dioxide with an efficiency 20 times that of the system used by plants.

Breaking up CO2

The problem with carbon dioxide is that it's a very stable molecule. It requires a fair bit of energy to break it down, but unless we can figure out how to break it down efficiently, we can't use atmospheric CO2 for any of the many things we use carbon for, like the polymers in our plastics or the graphite in our electrodes. While various ideas have been floated for incorporating atmospheric CO2 into usable chemicals, none of them has managed to scale economically yet.

Living organisms, however, do this trick all the time. More than 90 percent of the carbon removed from the atmosphere ends up being made into sugars by photosynthetic organisms, and there are at least five other minor pathways through which organisms build complex molecules starting from CO2. All of these processes have issues when it comes to how we might want to use them. Many of them are relatively inefficient; others will only work in environmental conditions that are inconvenient; all of them are plugged into a complex cellular biochemistry that often results in lots of side products or a final product that's not easy to turn into something useful.

All of those annoying features are what you might expect from evolution, which is tuning the carbon reactions for the environments and needs of specific organisms. So, the team behind the new work decided to do what evolution hasn't: bring together enzymes from organisms that would never come in contact with each other and build a pathway that's designed for efficient use of CO2.

To do so, they started by focusing on the limiting enzyme in most known pathways: the one that breaks down CO2 in the first place. The team searched the databases for all enzymes belonging to this class and identified ones that had the properties they were looking for. They settled on a group of enzymes called enoyl-CoA carboxylases/reductases, or ECRs.

ECRs were only discovered fairly recently, and they typically aren't even the main route for obtaining carbon in the organisms that have them. But for the purposes here, ECRs have a lot of good properties: they're highly efficient, don't undergo side-reactions with oxygen, and don't require any unusual chemicals to make the reaction work.

Building a pathway

But the reaction that ECRs catalyze is only the first step, and it would require a constant feed of chemicals to react the CO2 with. Most organisms obtain carbon dioxide as part of a cycle. They get it to react with a larger chemical, then break off a smaller carbon-containing molecule, and then use a few further reactions to re-form the original chemical. (You can see an example of this in a Calvin Cycle diagram.) So, the team decided to build an entire cycle that incorporates the ECR enzyme.

Rather than adapt an existing cycle, the researchers started from scratch, building hypothetical pathways that use biologically plausible molecules and then evaluating them for energy efficiency. Only once a cycle was identified did they search databases to find out whether any enzymes existed that could catalyze the reaction. They ended up with a 13-step cycle that incorporated CO2 at two different steps and ended by combining the two resulting carbons with acetic acid to form a four-carbon molecule called malic acid. A number of chemical co-factors and energy in the form of ATP would need to be added along the way, but on paper, it all worked out.
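
The design loop described here (enumerate chemically plausible candidate cycles, score each for energy efficiency, and only then search enzyme databases for catalysts) can be caricatured in a few lines of Python. Every reaction, cost, and candidate below is invented for illustration; the real search space and thermodynamics are vastly richer.

    # Caricature of the pathway-design loop: propose candidate cycles,
    # score them by CO2 fixed per unit of energy spent, keep the best.
    # All reactions, costs, and cycles here are invented placeholders.

    atp_cost  = {"carboxylation": 1.0, "reduction": 2.0,
                 "hydration": 0.5, "cleavage": 1.5}
    co2_fixed = {"carboxylation": 1, "reduction": 0,
                 "hydration": 0, "cleavage": 0}

    candidate_cycles = {
        "cycle_A": ["carboxylation", "reduction", "reduction", "cleavage"],
        "cycle_B": ["carboxylation", "hydration", "reduction", "cleavage"],
        # two CO2-fixing steps per turn, like the cycle described here:
        "cycle_C": ["carboxylation", "carboxylation", "reduction",
                    "hydration", "cleavage"],
    }

    def efficiency(steps):
        """CO2 molecules fixed per unit of ATP spent; higher is better."""
        return sum(co2_fixed[s] for s in steps) / sum(atp_cost[s] for s in steps)

    best = max(candidate_cycles, key=lambda name: efficiency(candidate_cycles[name]))
    print(best, f"fixes {efficiency(candidate_cycles[best]):.2f} CO2 per ATP")
    # The real workflow's next step: search enzyme databases for a catalyst
    # for each reaction in the winning candidate.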

And that's when the real work began.

In total, 12 of those 13 steps required a distinct enzyme to work, so the authors had to obtain the genes for all of these, make proteins, and then purify them. Once they had that, the team showed that adding the enzymes for each step ended up producing the products expected. Once all the enzymes were added, the expected end product (malic acid) was produced.

This stepwise assembly let the researchers identify inefficiencies. For example, things tended to bog down at step 10 of the cycle, leading to the accumulation of the chemical produced by step nine. So, they looked at the enzyme involved and determined the reaction would be more efficient if it used oxygen instead of the chemical it typically required. The team looked at the structure of the enzyme and redesigned it to use oxygen. It worked.

They kept tweaking the pathway. The overall design was replaced with one that used a somewhat different reaction pathway. Some of the enzymes ended up spitting out a bunch of side products that were unusable dead-ends; those were engineered to stop this. In other cases, new enzymes were added to do what the researchers call "proofreading"—when a dead-end side product was made, they converted it back to a useful one.

The new cycle in all its glory. Note that the same enzyme uses carbon dioxide at two points in the pathway, meaning each turn of the cycle uses two molecules of the gas.

By the time the team was done, the system used 17 different enzymes from nine different organisms, including bacteria, archaea, plants, and humans. The final system was truly impressive, using carbon dioxide with an efficiency 20 times that of the system used in photosynthesis.

The big picture

Take a moment to appreciate the scale of this accomplishment. In four billion years of evolution, life has only managed to evolve six known pathways that start with carbon dioxide and build more complex molecules. In just a few years, a bunch of grad students in Zurich added a seventh.

There are some pretty obvious limitations to this system as it now stands. A variety of biochemical co-factors need to be added to the reaction to get it to work, and the output—malic acid—is currently only used as a food additive. But malic acid undergoes a variety of reactions within cells, and there's no reason to think that some of these couldn't direct it into a useful industrial chemical. Or, there's no reason to believe we couldn't find other ways of using malic acid if there was suddenly a surplus of it.

The other thing is that the entire pathway can now be put inside cells, either normal bacteria like E. coli or the synthetic cells with a minimal genome that researchers are working on. If that's the case, then the need to supply all the chemical co-factors should go away, since the cells should be producing them anyway. More importantly, if the cell is made to depend on this pathway as its only source of carbon, evolution would have the chance to optimize it even further.

The paper also comes at an interesting time. International climate negotiations are taking place as nations start to grapple with the fact that the Paris Agreement isn't sufficient to keep the planet under the goal of 2 degrees Celsius warming. The US has submitted its plans for the mid-century, which include extensive use of carbon capture and storage to make its energy system carbon neutral. Even then, however, it's likely that we'll need to pull carbon directly out of the atmosphere before this century is out to limit warming.

Something like this, which could make atmospheric carbon into an industrial feedstock, might be essential to enabling that future. The same goes for a separate paper in the same issue of Science that describes re-engineering trees to get them to photosynthesize more efficiently under variable light conditions. We're probably going to need some sort of technology like this, so it's nice to see the fundamental science that could enable it getting done.

Science, 2016. DOI: 10.1126/science.aah5237

Updated to clarify the need for external energy supply.


04 Dec 19:08

“They're just another marketing tool”: major punk memorabilia burned in London

Adam Victor Brandizzi

My heroes.

The son of Sex Pistols manager Malcolm McLaren and designer Vivienne Westwood set fire to a collection of valuable punk memorabilia on Saturday, in protest against plans to celebrate the movement's 40th anniversary.

Joe Corre burned the items, whose value is estimated at 5 million pounds (6.2 million dollars, 5.9 million euros), on a boat on the River Thames in London.

"Punk was never, never meant to be nostalgic, and you can't learn how to be one at a Museum of London workshop," he told onlookers.

"Punk has become another marketing tool to sell you something you don't need. The illusion of an alternative choice. Conformity in another uniform."

The collection of memorabilia went up in smoke, along with fireworks and effigies of political leaders.

Corre had previously said he was disgusted by London's plans to celebrate 40 years of the punk subculture.

The program, which includes events, concerts, and exhibitions, is backed by the Mayor of London, the British Library, and the British Film Institute, among others.

Corre said he wanted to highlight "the hypocrisy at the heart of this 40-year heist of 'Anarchy in the UK'," the iconic Sex Pistols single, released on November 26, 1976.

"The establishment has decided it's time to celebrate it. It's trying to privatize it, package it, castrate it," said Corre, quoted by The Times. "It's time to set it all on fire."

A fire service boat helped extinguish the flames.

04 Dec 19:08

What am I doing?

by Cale

Sitting here and typing this out, I'm wondering what my life is about. You see, I studied to be one thing, and I've done it alright. But I'm still not sure if I picked the right fight. I'm thinking of making a change, and doing something new, something novel, and something true. It's just that there's an emptiness in my gut, and I'm not sure if this job'll make the cut. So I decided to quit and find that other path, even under the nose of my wife's probable wrath. But would you believe what I found in the new work? I found others fed up, you could tell by the smirk. So I learned that just as one man's junk is another's treasure, it seems that one man's shitty career is another's pleasure.

Just the other day my friend talked my ear off about how he's not happy. It sounded a bit rough, like an old dead tree that's sappy. He asked himself what the hell he's doing with his life? All that I wondered was whether I'd be lunching with my wife?

The post What am I doing? appeared first on Things in Squares.

04 Dec 19:06

ironoverwine: theinturnetexplorer: Being a nature photographer...

ironoverwine:

theinturnetexplorer:

Being a nature photographer seems great, maybe I should try…

04 Dec 19:04

An ‘Infinite’ Galaxy Puzzle That Can Be Built in Any Direction

by Christopher Jobson


The team over at Nervous System recently designed this fun Infinite Galaxy Puzzle that tiles continuously in any direction. Pieces from the top can be removed and added to the bottom, and likewise from side to side. So regardless of where you start the puzzle can continue in a seemingly infinite series of patterns. Each puzzle is printed with satellite imagery obtained from NASA and includes a few themed pieces like an astronaut, shuttle, and satellite. Apparently the puzzles were wildly popular and are now available as a pre-order for 2017. (via My Modern Met)

04 Dec 19:03

Saturday Morning Breakfast Cereal - A Better Family

by tech@thehiveworks.com


Hovertext:
But seriously, if you know anyone who could deliver that, I still have a few days to live.

04 Dec 19:02

Comic for December 03, 2016

by Scott Adams
04 Dec 19:02

Toilet Paper Bill

by delfrig


04 Dec 19:01

Play Date

by Brian


Bonus Panel

The post Play Date appeared first on Fowl Language Comics.

04 Dec 19:00

Photo

04 Dec 19:00

Comic for 2016.12.02

by Dave McElfatrick
04 Dec 18:59

Whomp! - Squeezy Does It

by tech@thehiveworks.com

04 Dec 18:57

Photo

04 Dec 18:56

11/30/16 PHD comic: 'Academic Apps'

Piled Higher & Deeper by Jorge Cham
www.phdcomics.com
title: "Academic Apps" - originally published 11/30/2016


04 Dec 18:54

Humor

by itsthetie



04 Dec 18:53

Ghost Business

by Reza


04 Dec 18:53

Saturday Morning Breakfast Cereal - Black Swan

by tech@thehiveworks.com


Hovertext:
Too soon?

New comic!
Today's News:

Wednesday Book Reviews!


A Cartoon History of the Universe (book 2) (Gonick) I decided I’m gonna plow through these. This one was as good as the first, but with the same (in my opinion) tendency to sometimes rely entirely on myth for parts of the story. To Gonick’s credit, he tends to point out when he does this, but to me it makes the stories less enjoyable, insofar as they’re presented as history. Still, quite good, and I feel like I’m learning a lot from his art style.

The Hidden Life of Trees (Wohlleben) I really enjoyed this book. Wohlleben works in forest management, and has written a wonderful book on all the weird ways in which trees adapt to their environments and communicate with each other (using chemical signals, electric signals, etc.). It contains a ton of strange info - for example, apparently some bug-infested trees will chemically signal parasitoids to come eat the bugs that are harming the tree. The author also claims that old trees are more disease resistant because they can communicate with each other about what pathogens have entered the area. Wohlleben occasionally gets a little sappy and mystical about forestry, but all of his serious claims are either backed by scientific evidence or have a disclaimer that they’re just something he suspects is true.

The Utopia of Rules (Graeber) Dammit, Graeber. Every time I wanted to hate this book, he had something really insightful to say. This is my second time reading a Graeber collection, and this one is very similar. There are big, interesting, sweeping thoughts about how humanity and society work. I kinda like this - it’s a sort of throwback to the way people sometimes wrote in the 19th century, trying to grandly analyze The Whole Thing. On the other hand, as with those writers, Graeber often makes statements that are simply wrong.

For instance, he has a whole theory on why superhero comics are the most popular. It comes from an anthropological perspective, which is interesting, but completely neglects the fact that (as any comics dork can tell you) non-Superhero comic genres basically got killed off in the mid-50s by the Comics Code Authority. It’s possible the theory could be salvaged, but it’d have to bear the weight of that weird turn in history. And yet… he’s got so much insight, you find yourself wanting his advice then wanting to scream at him. It’s like a conversation with a brilliant polymath who doesn’t quite have every little fact straight, but who nevertheless is absolutely delightful.

One particular bit really stuck with me: Graeber described the idea that in modern life, people have ideas but then don’t pursue them because they find something vaguely similar on Google. This is obvious, but Graeber’s theory is that this effect may hold back progress more than we think. I’ve certainly observed other cartoonists doing this, whereas my personal rule is to never check Google after I have an idea. It’s a waste of time, and it benefits no one. A bit later (see next week’s book reviews) I happened to read Tom Standage’s book on the telegraph, in which an important occurrence was that Samuel Morse had no clue other people had tried and failed to make a long-distance telegraph. I can’t help wondering if our incredible connectivity today has more subtle negative consequences than we typically consider.

The Man Who Knew Infinity (Kanigel) A great biography of Ramanujan, with the one caveat (for the potential buyer) that, well… from the perspective of storytelling, Ramanujan’s life just wasn’t that exciting. Of course, as a mathematician (in ways I’m sure I don’t understand) he was one of the most incredible in history. But, perhaps for that reason, his life consists of a lot of sitting around, having abstruse discussions, and making poor dietary choices. It’s a very good biography, but it can’t help but feel a bit tedious here and there, when describing minor flaps between Ramanujan and his relatives, for instance. This sort of thing is made doubly tiresome by the fact that it seems we often don’t actually know the full nature of this or that disagreement, because Ramanujan is treated almost like a God by those who knew him. Still, quite good, and if you want to know about Ramanujan, this is probably the book!

Demerit: Kanigel repeats an incorrect etymology of the word “posh,” in which it purportedly arose as a shipping acronym for “Port Outward, Starboard Home.” This is known to be false.

04 Dec 18:52

Vacation

by Brian


Bonus Panel

The post Vacation appeared first on Fowl Language Comics.

04 Dec 18:49

Adventures of the Ampersand

by Grant

This comic appears in the latest issue of The Southampton Review.

Posters are available at my shop.
You can now pre-order my book, The Shape of Ideas.
04 Dec 18:44

Saying Things

by Reza


04 Dec 18:43

Baby Post

[bzzzt] "REMEMBER TO CHECK IN FOR YOUR FLIGHT TO LONDON." "My wha-" [bzzzt] "YOUR UBER WILL ARRIVE IN FOUR MINUTES."
04 Dec 18:42

Anésia # 314

by Will Tirando


04 Dec 18:26

Comic for December 04, 2016

by Scott Adams