Shared posts

03 Apr 04:16

Easy Turkey and Brussels Sprout Quesadillas

by J. Kenji López-Alt

[Photographs: J. Kenji López-Alt]

Turkey and Brussels sprouts are a natural pair that should be brought out more than once a year. In this recipe, we combine them with gooey melted cheese and pickled jalapeños in a light and crispy quesadilla.

Why this recipe works:

  • Charring the Brussels sprouts gives them a nutty sweetness that complements the roast turkey.
  • Mixing all of your ingredients before stuffing the tortillas helps the melted cheese bind every bite.
  • The key to great, puffy, crisp quesadillas is moderate heat and enough oil to brown each side evenly.
  • We season each side of the quesadilla with salt as soon as it is cooked; the heat helps the salt stick in place.

About the author: J. Kenji López-Alt is the Chief Creative Officer of Serious Eats, where he likes to explore the science of home cooking in his weekly column The Food Lab. You can follow him at @thefoodlab on Twitter, or at The Food Lab on Facebook.

Ingredients

Makes 2 quesadillas, active time 20 minutes, total time 20 minutes

  • 1/4 cup vegetable or canola oil, divided
  • 1 1/2 cups finely shredded Brussels sprouts (about 6 ounces)
  • Kosher salt and freshly ground black pepper
  • 4 ounces (about 1 cup) roughly chopped or torn roast turkey
  • 2 pickled jalapeños, finely minced
  • 4 ounces (about 1 cup) grated Jack, Cheddar, or Oaxacan string cheese
  • 2 (10-inch) flour tortillas
  • 1/2 cup homemade or store-bought salsa verde

Procedures

  1. Heat 1 tablespoon oil in a medium cast iron or stainless steel skillet over high heat until lightly smoking. Add Brussels sprouts, season with salt and pepper, and cook, tossing and stirring occasionally, until wilted and lightly charred, about 2 minutes. Transfer to bowl and wipe out skillet.

  2. Allow Brussels sprouts to cool slightly, then add turkey, jalapeños, and cheese. Toss with hands until thoroughly combined.

  3. Spread half of cheese mixture over one half of one tortilla, leaving a small border around the edge. Fold tortilla firmly in half to enclose the cheese. Repeat with remaining tortilla.

  4. Heat remaining oil in the skillet over medium heat until shimmering. Carefully add both folded tortillas to skillet and cook, shaking pan gently, until first side is golden brown and puffed, 1 to 2 minutes. Carefully flip tortillas with a flexible slotted spatula, sprinkle with salt, and cook on second side until golden brown and puffed, 1 to 2 minutes longer. Transfer to a paper towel-lined plate and allow to rest 1 minute. Cut each into four pieces and serve with salsa verde.

06 Jan 00:09

Half in the Bag: The Wolf of Wall Street and 2013 Re-cap

by admin

As with previous years, Mike and Jay close out 2013 by seeing an actual good movie by a well-regarded director. This year it’s Martin Scorsese’s crime epic The Wolf of Wall Street. Meanwhile, Mr. Plinkett eats 37 double bacon cheeseburgers because hijinks.


08 Oct 00:46

I Just Want the Damn Pipette!

Submitted by: Unknown

27 Sep 02:05

Breaking Bad vs The Joker! – Make Me Draw

by Mike Matei
Make Me Draw! Episode 2! Get the print! http://sharkrobot.com/make-me-draw It’s a hydrochloric acid battle to the death between Walter White (Breaking Bad) and The Joker! Want Mike Matei to draw you a VS battle? Leave a comment below, and if chosen, your suggestion could be next! Winner for this video: https://www.facebook.com/cmvd101 Thanks to Casey and Jordan for coloring.

12 Sep 20:53

Skillet Spaghetti alla Carbonara with Kale

by Yasmin Fahr

We cook our pasta directly in the skillet and flavor it with bacon, black pepper, parmesan cheese, and kale, all in a creamy, egg-based sauce. [Photographs: Yasmin Fahr]

Note: Season cautiously, as both the bacon and cheese contribute a substantial amount of salt, but make sure to go heavy on the freshly cracked pepper! Having a little liquid left in the pan when you add the mixture isn't a bad thing, as it can help loosen up the sauce. If you find that the sauce is a little thick, you can thin it with a little hot water or broth.

About the Author: Yasmin Fahr is a food lover, writer, and cook. Follow her @yasminfahr for more updates on her eating adventures and discoveries, which will most likely include tomatoes. And probably feta. Happy eating!

Every recipe we publish is tested, tasted, and Serious Eats-approved by our staff. Never miss a recipe again by following @SeriousRecipes on Twitter!

Ingredients

Serves 4, active time 30 minutes, total time 30 minutes

  • 8 ounces bacon, pancetta, or guanciale, cut into 1/2-inch pieces
  • 1 shallot, thinly sliced (about 1/4 cup)
  • 3 cups chopped curly kale, stems discarded, leaves cut into 2-inch ribbons
  • Kosher salt and freshly ground black pepper
  • 3 1/2 cups homemade chicken stock or store-bought low-sodium chicken broth
  • 1 pound spaghetti
  • 4 eggs
  • 1 cup freshly grated Parmigiano-Reggiano, plus more for finishing

Procedures

  1. Heat bacon in a 12-inch skillet over medium heat and cook, stirring occasionally, until most of the fat is rendered and the bacon begins to crisp, about 5 minutes. Add the shallot and cook until fragrant and lightly softened, about 1 minute. Add the kale and cook, stirring, until the kale cooks down and begins to crisp, 3 to 4 minutes. Season to taste with salt and pepper. Transfer kale, bacon, and shallots to a bowl and set aside.

  2. Add the broth and pasta to the same skillet, adjust the heat to maintain a vigorous boil, and cook according to the package directions until al dente, stirring occasionally to ensure nothing sticks. When the pasta is almost done cooking, add the raw eggs, cheese, and black pepper to the kale and bacon mixture. When the pasta is finished, remove the skillet from the heat, add the kale mixture to the pan, and vigorously stir with a wooden spoon until thickened and creamy. Season generously with black pepper and serve immediately with more cheese and black pepper on the side.

04 Sep 01:35

The *SEM 2013 Panel on Language Understanding (aka semantics)

by hal
One of the highlights for me at NAACL was the *SEM panel on "Toward Deep NLU", which had the following speakers: Kevin Knight (USC/ISI), Chris Manning (Stanford), Martha Palmer (CU Boulder), Owen Rambow (Columbia) and Dan Roth (UIUC). I want to give a bit of an overview of the panel, interspersed with some opinion. I gratefully acknowledge my wonderful colleague Bonnie Dorr for taking great notes (basically a transcript) and sharing them with me to help my failing memory. For what it's worth, this basically seemed like the "here's what I'm doing for DEFT" panel :).

Here's the basic gist that I got from each of the panel members, who gave roughly 10 minute talks:

Dan Roth: doing role labeling restricted to verbs is not enough. As an easy example, "John, a fast-rising politician, slept on the train to Chicago"... by normal SRL we get that John is sleeping, but not the possibly more important fact that John is a politician. Another example is prepositions: "University of Illinois" versus "State of Illinois" -- "of" is ambiguous. They came up with a taxonomy of 32 relations and labeled data and then did some learning -- see the TACL paper by Srikumar & Roth that was presented at NAACL.

Commentary: the ambiguity of prepositions issue is cool and I really liked the TACL paper. It reminds me of learning Latin in high school and being confused that ablative case markers were ambiguous across from/by/with. It astounded me that that was an acceptable ambiguity, but of course English has equally crazy ones that I've just gotten used to. But it does make me think that some cross-linguistic study/model might be cool here. Even more broadly, it made me think about noun-noun compound semantics: "farmers market" (market put on by farmers) versus "fruit market" (market where you buy fruit) versus "fruit pie" (pie made out of fruit). I went back and read Lucy Vanderwende's dissertation, which dealt exactly with these issues. She had far fewer relations than Srikumar and Roth, though perhaps once you allow explicit prepositions the range of things you can express grows (though somehow my gut feeling is that it doesn't, at least in English).

Kevin Knight: Basically talked about their deep semantic approach to MT: see the abstract meaning representation web page for more. The idea is that people who work on syntax don't Balkanize into those who do PPs, those who do VPs, etc., so why should semantics break apart like it does. AMR is a very GOFAI-style representation for language, and they've annotated a Chinese-English bilingual copy of Le Petit Prince with this representation. Now they need analyzers (hard), generators (hard) and transformation formalisms (hard). The nice thing is that this one representation captures almost all relevant semantic issues: scoping, argument structure, coreference, etc. For instance, co-ref is not explicitly annotated: it's just that a single agent can participate in multiple predicates. (Note: not yet across sentences.)
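
To make the re-entrancy point concrete, here is a tiny sketch of my own (not something shown at the panel): an AMR-style graph stored as plain (variable, role, value) triples, in which the only thing marking the within-sentence coreference is that one variable fills roles in two predicates. The sentence, variable names and helper code are invented for the illustration; the frame names loosely imitate PropBank-style senses.

    # Hypothetical illustration: an AMR-style graph for "John wants to sleep,"
    # stored as (variable, role, value) triples. Everything here is made up
    # for the sketch; it is not part of the actual AMR tooling.
    from collections import defaultdict

    triples = [
        ("w", "instance", "want-01"),   # the wanting event
        ("s", "instance", "sleep-01"),  # the sleeping event
        ("p", "instance", "person"),
        ("p", "name", "John"),
        ("w", "ARG0", "p"),             # the wanter is p
        ("w", "ARG1", "s"),             # what is wanted is the sleeping event
        ("s", "ARG0", "p"),             # the sleeper is the same variable p
    ]

    # For each variable, collect the predicates in which it fills a role.
    concepts = {var: val for var, role, val in triples if role == "instance"}
    participation = defaultdict(list)
    for var, role, val in triples:
        if role.startswith("ARG") and val in concepts:
            participation[val].append((concepts[var], role))

    # "p" (John) shows up under both want-01 and sleep-01; that re-entrancy is
    # all that encodes the coreference within the sentence.
    print(participation["p"])  # [('want-01', 'ARG0'), ('sleep-01', 'ARG0')]

In actual AMR the same graph would be written in PENMAN notation with the variable p simply reused, but the triple view makes the shared-participant reading easy to see.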

Commentary: It's hard not to get excited about this stuff, especially when Kevin talks about it. His enthusiasm is infectious. I left the talk thinking "wow I want to work on that!" There's of course the worry that we've tried this before and failed and that's why things in semantics Balkanized, but maybe the time is right to revisit it. For instance, Bonnie herself (note: she didn't tell me this; it had come up in recent discussions with Philip Resnik and our postdoc Junhui Li) had a meaning representation very similar to AMR called Lexical Conceptual Structures (LCS), and Nizar Habash had a hybrid rule-based/statistical approach to translating there. The idea was that if you want to handle divergent translations (classic example: "the bottle floated across the river" (English) versus "the bottle crossed the river floatingly" (Spanish, I think)), you need a representation that abstracts manner from predicate. But it's still very cool. (Actually in digging up refs, I just found this paper on mapping from LCS to AMR... from AMTA 1998!)

Martha Palmer: focused mostly on event relations that go across sentences, which includes things like event coreference, bridging relations (enablement, result) and so on. They're also looking seriously at type (evidential, aspectual, etc.), modality (actual, hypothetical, generic, hedged, etc.), polarity and aspect. They are currently doing a lot of work in the clinical domain, in which these distinctions are really important if you want to understand, say, patient medical histories.

Commentary: this is a bit outside things I usually think about, so I have less to say. I really like the hyper-sentence view, of course.

Owen Rambow: talked about some of my favorite work that I've seen recently: basically work on propositional attitudes. The view Owen put forth is that most of NLP is focused on a world of facts, and the goal of NLU is to figure out what these facts are. They are taking a much more social model of text meaning, in which you really care about inferring participants' cognitive states (standard triumvirate: belief, desire and intention). This actually shows up in at least one English-German translation example, in which Google Translate essentially misses a very important subjunctive.

Commentary: I really liked the original work Owen did on BDI inference and I'm thrilled it's going further. I think one of the historical reasons why I find this so interesting is that propositional attitudes are basically what I started working on when I started grad school, looking at discourse analysis through RST. I think many people forget this, but the discourse relationships in RST (and other discourse theories) are really based on attitude. For instance, X is in a background relation to Y if (roughly) the listener already believes X and the listener also believes that X increases the chance of Y. (Or something like that: I just made that up :P.) But it's all about belief of listeners and utterers.

Chris Manning: focused on deep learning, basically asserting (in a manner designed to be a bit controversial) that Stanford dependencies are their meaning representation and that the big problems aren't in representations. Sure, Stanford dependencies miss out on a lot (quantification, tense, semantic roles, modality, etc.), but he felt that there are more important problems to address. What we need instead, he argued, are "soft" meaning representations, like the ones vector space models and distributed representations give us, giving rise to something akin to Natural Logic.
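
As a toy illustration of what "soft" buys you (my own example, not anything Chris showed): with discrete symbols, "dog" and "puppy" either match or they don't, whereas with vector representations similarity is graded. The three-dimensional vectors below are made up for the sketch; real distributed representations are learned from data and have far more dimensions.

    # Hypothetical hand-set vectors; real word embeddings are learned, not chosen.
    import math

    vec = {
        "dog":   [0.9, 0.8, 0.1],
        "puppy": [0.85, 0.75, 0.2],
        "car":   [0.1, 0.2, 0.95],
    }

    def cosine(u, v):
        """Cosine similarity: near 1.0 for similar directions, near 0 for unrelated."""
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm

    # A symbolic match is all-or-nothing; vector similarity is graded.
    print("dog" == "puppy")                            # False
    print(round(cosine(vec["dog"], vec["puppy"]), 3))  # close to 1
    print(round(cosine(vec["dog"], vec["car"]), 3))    # much smaller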

Commentary: to a large degree I agree with the notion that the "big problems" in language are probably not those that (eg) semanticists like to look at, at least from the typical view of NLE in which we want systems that do well on average across a distribution of examples that we've cultivated. But I also worry that there's a bit of magical thinking here, in the sense that it kind of feels like a cop-out: it's too hard to define categories by hand so let's let the machine figure it out. Now, don't get me wrong, I'm all for machines figuring out stuff (I gave a not-very-well-received talk to that effect at a workshop a couple years ago on linguistics in NLP), but I'm also a bit reticent to believe that this is really going to bring us any closer to really solving the NLU problem (whatever that is), though of course it will get us another 5-10% on standard benchmarks. (Ok this sounds way too negative: I actually really liked Chris' talk, and one of the things I liked about it was that it challenged my thinking. And I agree that there is a lot that we shouldn't be designing by hand -- some people, like Yoshua Bengio, would probably argue that we shouldn't be designing anything by hand, or at least that we shouldn't have to -- but I guess I still belong to the camp of "linguists give the structure, statistics gives the parameters.")

There was also a lot of really interesting discussion after the presentations, some of which I'll highlight below:

Lucy Vanderwende, I think mostly directed at Kevin, fell into the "we tried this X years ago" camp, basically saying that whenever they tried to abstract more and more from the input representation, they ended up generating very boring sentences because they'd thrown out all the "nuance" (my word, not hers). The discussion afterward basically revolved around whether you annotate input sentences with meaning (which is currently the standard) or throw them out with the bathwater. Owen pointed out that the meaning of a passive sentence is not +passive but something much more nuanced, and if you could capture that correctly, then (in principle) generators could reflect it properly in the target language. (Me: For instance, maybe in some wacky language a passive sentence actually means that you're trying to emphasize the subject.)

There was also a lot of discussion around Chris, I think partially because he went last and partially because he was trying to be controversial. Mausam made an argument (akin to what I wrote above) that logicians have made a billion logics of language and nothing really has worked (in a sense it's been a series of negative results). What about inference rules or consistency?

Okay, that's all I want to write for now. Congrats if you made it this far. And thanks to the *SEM organizers for putting together this great panel!

15 Apr 09:13

Watch what it takes to make a pair of hot fitting tongs —...

by rion


Watch what it takes to make a pair of hot fitting tongs — used in horseshoeing — within a 45-minute time limit at the 2011 World Championship Blacksmiths’ Competition.

via Viral Viral Videos.