Shared posts

28 Apr 08:44

Sports marketing: the PSG advent calendar

by Anastasia Babatzikis

If you are looking for successful examples of sports marketing, PSG is an excellent source of inspiration. Now a genuine international brand, PSG is followed by tens of millions of fans across its various channels. To keep these audiences engaged, the club regularly uses interactive content as a lever for engagement and relationship marketing.

In this example, discover how one of its most viral campaigns works and the results it delivered: the "Advent Calendar" campaign. Created with Qualifio, the campaign was used (successfully) by PSG to build audience loyalty and collect opt-ins.

Interactive content in sports marketing

Sport is generally a very popular topic for campaigns, and therefore a driver of virality. Whether or not you are a sports brand yourself, you can do sports marketing: for example, capitalise on major moments such as the World Cup to create more engagement with your audiences and build their loyalty.

You still need to choose the right formats. Interactive formats, as opposed to static ones, are ideal for starting conversations with your visitors around these events. Quizzes, polls, contests, personality tests, team selectors and more let you engage your visitors in a playful way and bring them back regularly to your digital channels. Thanks to the forms these campaigns usually contain, you can also collect segmentation data, which will help you personalise your communications and improve your ROI.

In the client case below, discover a specific example of sports marketing built on interactive content: the PSG advent calendar.

The advent calendar, an ideal format for loyalty

If the goal of one of your campaigns is loyalty, as it was for PSG, the advent calendar is an ideal format. Every day, give your participants a chance to win a new prize; they will be encouraged to come back daily.

You can also use such a campaign to collect data: last name, first name, age, but also favourite football team, interest in one sport or another, and so on. Every relevant piece of information is a personalisation lever that will let you individualise your marketing communications and further build audience loyalty.

The article Sports marketing: the PSG advent calendar appeared first on Qualifio.

17 Mar 23:06

Collective intelligence and Big Data: definition and examples in business

by Bastien L

In the Big Data era, the collective intelligence of human beings generates vast amounts of data that can help solve some of humanity's biggest problems. It can also analyse certain data more effectively than computer algorithms. Discover the close relationship between collective intelligence and Big Data.

Collective intelligence: definition

The notion of collective intelligence refers to a group intelligence, or a shared intelligence, that emerges from collaboration, collective effort or competition between several individuals. It allows decisions to be made by consensus. Voting systems, social networks and other methods of quantifying mass activity can all be considered forms of collective intelligence.

This type of intelligence appears as an emergent property of the synergy between the knowledge carried by data, software, computer hardware and domain experts, making it possible to take better decisions at the right time. Put simply, collective intelligence results from combining humans with new ways of processing information.

A widespread concept

The concept of collective intelligence appears in sociology, in computer science, and also in business. For Pierre Lévy, it is a form of intelligence that is universally distributed, constantly improving, coordinated in real time, and that leads to an effective mobilisation of skills. The foundation and goal of this form of intelligence are mutual recognition and the enrichment of individuals rather than the cult of hypostasised communities. For Pierre Lévy and Derrick de Kerckhove, it refers to the capacity of networked information technologies to deepen the collective pool of social knowledge while simultaneously extending the reach of human interactions.

It contributes strongly to the transfer of knowledge and power from the individual to the collective. According to Eric S. Raymond and JC Herz, open-source intelligence will eventually produce results superior to the knowledge generated by proprietary software developed within companies. For Henry Jenkins, it is an alternative source of media power. He criticises schools and education systems for promoting autonomous problem-solving and individual learning while remaining hostile to learning through collective intelligence. Even so, like Pierre Lévy, he considers it essential for democratisation, since it is tied to a knowledge-based culture fuelled by the sharing of ideas, which in turn contributes to a better understanding of a diverse society.

Origin of the concept of collective intelligence

The concept of collective intelligence dates back to 1785, when the Marquis de Condorcet observed that if each member of a group is more likely than not to make the correct decision, the probability that the group's majority vote is correct increases with the number of members. This is the jury theorem.
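
To make the theorem concrete, here is a minimal sketch (in Python, not from the original article) that computes the probability of a correct majority vote for an odd-sized group whose members are each independently correct with probability p. When p is above 0.5, the value climbs towards 1 as the group grows.

```python
from math import comb

def majority_correct_probability(n: int, p: float) -> float:
    """Probability that a simple majority of n independent voters is correct,
    when each voter is individually correct with probability p (n assumed odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

# With p = 0.6, the majority is right more and more often as the group grows:
for n in (1, 11, 101, 1001):
    print(n, round(majority_correct_probability(n, 0.6), 4))
```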

Another precursor of the concept is the entomologist William Morton Wheeler. In 1911, he argued that apparently independent individuals can cooperate to the point of becoming a single organism, a collective intelligence. The scientist observed this collaborative process in ants, which act like the cells of a single entity.

In 1912, Émile Durkheim identified society as the sole source of logical human thought. In his view, society constitutes a higher intelligence because it transcends the individual in space and time. In 1962, Douglas Engelbart established the link between collective intelligence and the effectiveness of an organisation: according to him, three people working together on a problem are more than three times as effective as a single person working alone.

Collective intelligence in the Big Data era

In the Big Data era, many companies tend to look for answers to their questions where they are easy to search for, rather than where they are likely to be found. In reality, the probability that a Big Data research team will uncover useful information depends on the type of data available. Structured, numerical, explicit and clean data is more easily processed by computers, whereas unstructured, analogue and ambiguous data makes more sense to human brains.

However, for a human as for a computer, the larger the data sets, the more processing power is required. For structured data, more powerful computers will do the job. For unstructured data, on the other hand, it is essential to rely on the collective intelligence of many human brains.

If the goal is to predict the future, a purely statistical approach to Big Data is particularly shaky, since the available data is necessarily rooted in the past. Such data can certainly help predict situations similar to those of the past, for example for a mature product line in a stable market, but it becomes useless for forecasts involving new products or disrupted markets.

Collective intelligence: example applications

Here are a few examples of situations in which predictions made through collective intelligence prove more useful than data-driven Big Data forecasts:

Disrupted markets: collective intelligence in business

In the mid-2000s, worldwide demand for dairy products suddenly tripled within a few months. After a decade of stability, dairy producers could no longer rely on data-based predictive models. Industry players therefore turned to the collective intelligence of farmers, who are closer to consumers, to better understand and model the drivers of this new demand.

New products

A few years ago, Lumenogic worked with a team of marketing researchers to run a prediction market inside a Fortune 100 company, focusing on new products. This predictive method proved more accurate than data-based forecasts in more than 67% of cases, reducing the average error by about 15% and the magnitude of errors by 40%.

Political elections

Over the past 20 years, prediction markets have become famous for their ability to outperform polls in forecasting election results. Last November, during the US presidential election, the collective intelligence of Hypermind's traders beat all the data-driven statistical prediction models set up by the media giants. The explanation is simple: collective intelligence can aggregate large amounts of unstructured information about what makes each election unique, at a level that is out of reach for statistical algorithms, however sophisticated.

Despite the current surge of structured data, it is essential to keep in mind that the world is full of unstructured data that only the human mind can make sense of. So if you find yourself searching Big Data for answers in vain, remember that collective intelligence could solve the problem.

Collective intelligence and Open Data to understand epidemics

Researchers at the Wellcome Trust Sanger Institute and Imperial College London have developed Microreact, a free platform for visualising and tracking epidemics in real time. The tool has notably been used to monitor outbreaks of Ebola, Zika and antibiotic-resistant microbes. The team worked with the Microbiology Society to allow researchers around the world to share their latest information on epidemics.

Until now, data and geographic information on the movements and evolution of infections or diseases have been confined to databases inaccessible to the public. Researchers have had to rely on information published in research articles, sometimes outdated, which contained only static visuals showing a small part of the epidemic threat.

The Microreact system: making data sharing easier

Microreact is a cloud-based system that combines the power of open data and the collective intelligence of the web to offer real-time visualisation and sharing of worldwide data. Anyone can explore and examine the information with unprecedented speed and precision. The tool can play a key role in monitoring and controlling epidemics such as Zika or Ebola.

Data and metadata are uploaded to Microreact from a web browser. They can then be visualised, shared and published through a permanent web link. The partnership with Microbial Genomics allows the journal to create data from prospective publications. The project promotes open availability of and access to data, while building a unique resource for health professionals and scientists around the world.

The work of Dr Kathryn Holt and Professor Gordon Dougan perfectly illustrates how Microreact can democratise genomic data and the insights derived from it. They recently published two articles on the global distribution of the typhoid bacterium and the spread of a drug-resistant epidemic strain, and published their data directly on Microreact to help other researchers build on their work.

By publishing their data on Microreact, the researchers ensured its long-term availability and allowed others to learn from their work, using the information as a basis for comparison or as a foundation for future projects. Microreact also lets individual researchers share information globally and in real time.

Collective intelligence to solve humanity's biggest problems

By 2050, humanity will most likely have to face numerous problems. Rising sea levels, global warming and resource shortages are some of the challenges we will have to meet. To do so, we can and must rely on collective intelligence.

With the rise of internet forums, research groups, wikis, social networks and the blogosphere, new problem-solving methods have emerged. Scientists now see the internet as a shared research group. This mode of learning and communication, which can be categorised as collective intelligence, makes it possible to draw a consensus from many minds to answer complex challenges.

Managing climate change through this lens

MIT's Center for Collective Intelligence is developing an online forum called the Climate Collaboratorium. The forum takes the form of a constantly evolving computer model representing the Earth's atmosphere and human systems, fed by online scientific discussion rooms. All the variables and factors related to climate, such as the environment, human interactions and ecology, are included in this evolving model.

Professor Thomas W. Malone, the center's founder, compares the Collaboratorium to the Manhattan Project, which developed the atomic bomb during the Second World War. The difference is that the Collaboratorium aims to solve a problem that concerns every human being. Thanks to new technologies, starting with the internet, it is possible to bring together far more people than during the Second World War.

At the end of 2014, the Climate CoLab had 33,000 members from more than 150 countries. NASA, the World Bank, the Union of Concerned Scientists, many universities and other government agencies are involved in the project. Its goal is to pool everything human beings can do to fight climate change, reviewing social, political, economic and engineering solutions.

Building better jets with collective intelligence

Boeing uses the creative potential of collective intelligence to design jets. The 787 Dreamliner was created in collaboration with more than 1,000 partners, each contributing ideas to build the ultimate aircraft. The jet, which entered service in 2011, combines design elements of the 777 with composite materials such as carbon-fibre-reinforced plastic, which makes up about 50% of the primary structure and replaces aluminium. It set a new standard for efficiency and comfort.

To speed up the design process and reduce its cost for this innovative aircraft, Boeing decided to rely on its suppliers. The Global Collaborative Environment (GCE) links all the members of the 787 design team. Boeing first designed 70% of the aircraft, then let its 43 suppliers and many other subcontractors from 24 countries collaborate across 135 sites. To move the project forward, these partners abandoned their respective computer-aided design systems in favour of the common language and format of Boeing's Catia V5 system. Thanks to this standardised programme for design and data communication, Boeing reduced its documentation from 2,500 pages to 20.

Collective intelligence, online research groups, cloud storage and Big Data are the new engines of creative thinking. The complex and critical problems humanity faces today require solutions to be deployed faster than ever before, which is why using these technologies has become indispensable.

Seoul Innovation Challenge, an urban planning project based on collective intelligence

The Seoul Innovation Challenge aims to draw on collective intelligence to solve the urban problems Seoul faces in the areas of safety, the environment and traffic. The challenge ran for 200 days and was open to citizens, foreigners, companies and universities, with cooperation, innovation and openness as its watchwords. Whenever someone suggested an innovative idea on the platform, participants entered a collaborative process with 100 professional mentors over the following seven months.

The preliminary stage took place in July 2017 and 32 projects were selected. The ideas were then developed over the following three months, with the final stage held at the end of November. The 32 projects will receive support towards commercialisation: help with registering intellectual property, running demonstrations, setting up partnerships and finding investors. It was possible to register for this collective intelligence project on the official website.

Can collective intelligence outperform artificial intelligence?

The French philosopher Pierre Lévy, a specialist in collective intelligence, has been developing software since 2015 to analyse data from social networks in order to understand people's real motivations. The software automatically transforms words from various languages into a symbolic algorithmic meta-language called the Information Economy MetaLanguage (IEML).

According to the author of "L'intelligence collective : Pour une anthropologie du cyberespace", collective intelligence is originally found in nature. Thanks to language and technology, however, human collective intelligence is vastly superior, because it rests on the manipulation of symbols. We have now entered the era of the algorithmic manipulation of symbols.

For the philosopher, this collective form of intelligence stands in contrast to artificial intelligence: the goal is not to make computers smarter, but to use computers to make humans smarter. To do so, Lévy intends to create a universal categorisation system as flexible as natural language, which will be used to classify the countless data available on the web. Semantic relationships determine how the data is organised, and the system should allow new ideas to emerge. In short, IEML connects ideas through computing. The philosopher considers that the internet already represents a form of this kind of intelligence, but wants to add a reflexive dimension to it.

Aware that such a system could give large companies like Apple and Google, or government agencies like the NSA, access to information on an unprecedented scale, Pierre Lévy nevertheless stresses that his goal is to give the power of information to the people. Like the Silicon Valley activists of the 1970s, who wanted to give everyone access to computing, the philosopher wants to give people the ability to analyse and make sense of the data available on the internet.

Collective intelligence: software created by Pierre Lévy

To do this, Pierre Lévy relies on two tools: the IEML language itself and the software that implements it. The software is distributed openly and free of charge under version 3 of the GPL, and all changes made to IEML must be fully transparent.

Admittedly, not everyone can contribute to the IEML dictionary, since it requires specialised knowledge: linguistic skills or mathematical knowledge are indispensable. Everyone, however, can create new tags. The philosopher considers this the most that can be done to give people access to freedom: you cannot force people to be free, but you can give them all the tools they need to emancipate themselves.

Does Big Data foster training in collective intelligence?

Training in collective intelligence is attracting more and more followers. Companies want their employees to pool their cognitive resources in order to work towards a common goal, which usually boils down to increasing the company's profit. To that end, some organisations take part in team seminars designed to foster this collective thinking, since it relies on building group cohesion.

Big Data combined with artificial intelligence can make this training phase easier. Matching algorithms can find what team members have in common and thus encourage fruitful interactions. Getting along well is not the only ingredient of good collective intelligence, though: you also need to know that someone else is working on the same project as you. In a large company spread across the globe, that is not always obvious. A data lake distributed in the cloud, together with enterprise access tools, can provide that information.
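
As an illustration of the kind of matching mentioned above (not code from the article), here is a minimal Python sketch that scores the overlap between hypothetical employee profiles with a Jaccard similarity, so that people working on related topics can be pointed to one another.

```python
def jaccard(a: set, b: set) -> float:
    """Overlap between two sets of tags (skills, projects, interests)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical profiles, e.g. pulled from an HR system or a data lake.
profiles = {
    "alice": {"python", "forecasting", "supply-chain"},
    "bob":   {"python", "nlp", "forecasting"},
    "carol": {"design", "ux"},
}

# For each person, find the colleague with the most overlapping tags.
for name, tags in profiles.items():
    best = max(
        (other for other in profiles if other != name),
        key=lambda other: jaccard(tags, profiles[other]),
    )
    print(f"{name} -> {best} (similarity {jaccard(tags, profiles[best]):.2f})")
```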

Ideally, as part of training, Big Data would also provide ways to measure the progress of seminar participants: assessing interactions within the group, and even predicting how much collaboration across the company will improve.

Source : https://blog.hypermind.com/2015/01/28/the-role-of-collective-intelligence-in-the-age-of-big-data/

The article Collective intelligence and Big Data: definition and examples in business was published on LeBigData.fr.

24 Feb 06:56

Repost: Sending bulk email (mail merge) using Gmail and Google Sheets

by Martin Hawksey

A repost of Create a mail merge using Gmail and Google Sheets, which I contributed to the G Suite Solutions Gallery.

Simplify the process of producing visually rich mail merges using Gmail and combining it with data from Google Sheets. With this solution you can automatically populate an email template created as a Gmail draft with data from Google Sheets. Merged emails are sent from your Gmail account allowing you to respond to recipient replies.

Technology highlights

Try it

  1. Create a copy of the sample Gmail/Sheets Mail Merge spreadsheet.
  2. Update the Recipients column with email addresses you would like to use in the mail merge.
  3. Create a draft message in your Gmail account using markers like {{First name}}, which correspond to column names, to indicate text you’d like to be replaced with data from the copied spreadsheet.
  4. Click on custom menu item Mail Merge > Send Emails.
  5. A dialog box will appear and tell you that the script requires authorization. Read the authorization notice and continue.
  6. When prompted, enter or copy/paste the subject line used in your draft Gmail message and click OK.
  7. The Email Sent column will update with the message status.

Next steps

Additional columns can be added to the spreadsheet with other data you would like to use. Using the {{}} annotation and including your column name as part of your Gmail draft will allow you to include other data from your spreadsheet. If you change the name of the Recipients or Email Sent columns, this will need to be updated by opening Tools > Script Editor.
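
The merge itself boils down to substituting each {{Column name}} marker with the value from that column in the current row. The published solution does this in Google Apps Script; as a language-agnostic illustration of the substitution step only, here is a minimal Python sketch using hypothetical column names.

```python
import re

def fill_template(template: str, row: dict) -> str:
    """Replace each {{Column name}} marker with the matching value from `row`."""
    return re.sub(r"\{\{(.*?)\}\}", lambda m: str(row.get(m.group(1).strip(), "")), template)

draft = "Hi {{First name}}, your order {{Order id}} has shipped."
row = {"Recipient": "ada@example.com", "First name": "Ada", "Order id": "A-1042"}
print(fill_template(draft, row))  # -> Hi Ada, your order A-1042 has shipped.
```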

For more information on the number of email recipients that can be contacted per day you can read the Current Quotas documentation. If you would like to find out more about the coding pattern used to conditionally read and write Google Sheets data here is a related blog post.

To learn more about Google Apps Script, try out the codelab which guides you through the creation of your first script.

You can also view the full source code of this solution on GitHub to learn more about how it was built.

22 Feb 07:27

Gift Guide: Gifts for the promising podcaster

by Brian Heater

Welcome to TechCrunch’s 2019 Holiday Gift Guide! Need help with gift ideas? We’re here to help! We’ll be rolling out gift guides from now through the end of December. You can find our other guides right here.

Spotify reportedly spent nearly $500 million on podcasts in 2019. The good news is that the rest of us can get into that world for considerably less. In fact, the low barrier of entry has always been one of podcasting’s primary selling points.

Before we go any further, I’d recommend everyone check out our ongoing series “How I Podcast,” in which top podcasters give a peek behind the curtain at their podcasting rigs. The standard disclaimer applies here, as ever: there’s no one-size-fits-all solution to any of this. Your needs will vary greatly depending on how much you’re willing to spend and what your recording setup is (remote vs. in-person, the number of guests you usually have, etc.).

If you’re just getting started, just start. You don’t need high end mics or mixing boards — even if you’re just recording into your iPhone, it’s better to get the ball rolling than to worry about perfect fidelity right off the bat.

But for you or anyone on your list who’s looking to get a bit more serious about podcasting in 2020, this should be a good place to start. It’s easier than ever to make a show sound professional, one upgrade at a time. What follows is a selection of software and gear for anyone looking to step up their game.

(Oh, and while we’re talking about podcasts… check out my weekly interview show, RiYL)

This article contains links to affiliate partners where available. When you buy through these links, TechCrunch may earn an affiliate commission.

Zencastr Subscription

 

There are a ton of different compelling software choices for today’s podcaster, including Spotify’s Anchor for real beginners, up to Adobe’s Premiere for the pros. For remote recorders, I recommend Zencastr. Our own Original Content podcast uses the software, and I’ve had pretty good experiences with its real-time audio levels and cloud-based recording. Gone are the days of hacking something together out of Skype calls.

Price: $20 per month

Rodecaster Pro

 

Introduced last year, the Rodecaster Pro is the most expensive item on the list, but it also just might be the most indispensable for anyone looking to set up an at-home studio. It’s a brilliant little multitrack board, and quite frankly, I’m surprised there isn’t more competition in this space yet. For the beginning podcaster up through everyone who’s ready to sign a contract with NPR, the Rodecaster is a terrific, user-friendly solution for recording more than two people face-to-face.

Price: $599 on Amazon

Zoom H4N PRO Digital Multitrack Recorder

 

When my Tascam finally gave up the ghost earlier this year, I decided to try something new. I’m glad I did. While it’s true that most of these multitrack recorders haven’t changed much in the past decade, Zoom offers a couple of key advantages. Most notable is far better real-time level tracking. I produce my podcast on the fly as I’m recording, and the ability to quickly monitor volume at a glance is paramount. I take the H4N with me wherever I travel, along with a pair of external mics.

Price: $219 on Amazon

AKG Lyra

 

Logitech’s Blue has had the USB market cornered for some time now, but Samsung-owned AKG offers compelling alternatives at an even more compelling price. The $149 Lyra is certainly the best looking of the bunch. It’s got a USB-C input, real-time monitoring and far clearer settings for a variety of different recording methods. I’ve been playing around with the mic a bit and will offer a more thorough writeup soon, but in the meantime, I can attest that it’s a great sounding mic for remote recordings.

Price: $150 on Amazon

Blue Raspberry

The Lyra’s biggest drawback, however, is its size. Blue’s Raspberry can’t compete on the sound front, but it’s far more portable. More than once I’ve found myself sticking it in a backpack and a suitcase. Blue also offers up a mini-version of the Yeti at a fraction of the price, but this older Blue mic simply sounds better.

Price: $149 on Amazon

Shure SM7B

At over twice the price, the Shure SM7B is a bigger commitment than the previous options. But as the choice of pro-level podcasters everywhere, Shure’s mics are a studio gold standard. The more portable SM-57s are also a terrific (and lower-cost) option for more portable rigs. You’ll get a great-sounding show either way.

Price: $399 on Amazon

Sennheiser Momentum

Whether it’s for editing or just minimizing echoes during interviews, you’ll want a good pair of headphones. There’s no shortage of over/on-ear options, but I’m partial to these Sennheisers for their combination of sound, price and classic good looks.

Price: $199 on Amazon

22 Feb 07:25

Landscapes Turned Upside Down

by Pauline

To mark the launch of United Airlines' Dreamliner route in Australia, Sydney-based studio Cream Electric Art created an advertising campaign that turns the world upside down. In this series of posters, the photomontages offer a surprising, never-before-seen view: like a pop-up book, the landscapes occupy both the foreground and the horizon, this time seen from the sky. Entitled "Dreamers Welcome", the campaign was led by creative director Cameron Hearne, with photographer Jeffrey Milstein.

22 Feb 07:21

I stumbled across a huge Airbnb scam that’s taking over London

by James Temperton
22 Feb 07:21

How to hack your morning routine to get things done

by Laurie Clarke
Forget celebrity routines. If you want to have a truly productive day, follow these tips
22 Feb 07:20

Airbnb has devoured London – and here’s the data that proves it

by James Temperton
More than 10,000 Airbnb listings in London are seemingly in breach of the city’s 90-day limit on short-term rentals, according to new research
22 Feb 07:18

Heston Blumenthal wants robots to make your boring lunch

by Amit Katwala
The experimental chef has joined the board of Karakuri, a startup trying to automate the mass customisation of sandwiches and salads
22 Feb 07:18

How Citymapper deals with the chaos of the world's cities

by Victoria Turk
Rolling out in new countries isn't just a matter of translation, says Citymapper founder Azmat Yusuf
22 Feb 07:18

As electric car sales soar, the industry faces a cobalt crisis

by Nicole Kobie
First it was lithium, now it's cobalt. Electric vehicles need them for batteries, but supply issues will only worsen as demand rises
22 Feb 07:17

Who will really benefit from the EU's big data plan?

by Gian Volpicelli
Europe wants to force companies to pool their industrial data. It's unclear what the end game is
22 Feb 07:16

Shopify decides to join the Libra cryptocurrency project

by Hadrien Augusto

Shopify is joining Facebook's virtual currency project to "build a payment network that makes money easier to access".
22 Feb 07:16

Imminent launch for this streaming platform backed by 2 billionaires

by Arthur Vera

Quibi

Netflix should tremble: this newcomer intends to hit hard from day one and has some heavyweight backing.
15 Feb 20:13

The Revolutionary Role of AI in B2B Marketing

by draab@raabassociates.com (david raab)

July 18, 2019

By Greg Peverill-Conti, Zylotech

Over the years, I’ve had the opportunity to work with a number of companies that lie at the intersection of audiences, data, and technology. As time has passed, many of the challenges associated with using data to reach potential customers have been solved. Amassing audience information? Check. Recognizing a prospect across platforms and devices? Check. Integrating information and insights into a range of sales and marketing technologies? Check, again. Many of the fundamental problems of data-driven marketing have been recognized and addressed.

Now, we’re dealing with harder problems. Which isn’t to say the earlier challenges were easy, but rather that the new problems weren’t initially foreseen. For example, now that marketers have access to so much data, how can they evaluate its quality? How is data quality maintained over time? How can it inform smart planning, support decision-making, and fuel marketing programs that are engaging but not creepy?

Much of the thinking around using audience data comes from the B2C world, whose brands and marketers were among the first to see the potential for reaching and engaging customers online and through social channels. That is rapidly changing and more B2B brands are not only thinking about their customer data, but are also rethinking the ways they put that data to work.

Companies like VanillaSoft are making it easier for inside sales teams to use social channels to reach business buyers - and set up a cadence and process for efficient interactions. Even just a few years ago, the idea of reaching a B2B purchaser via SMS or Facebook or Twitter would have raised eyebrows. Now, they have become part of the daily routine and are recognized as valid channels for engagement.

This type of engagement depends on having accurate contact data, the type supplied by companies like DiscoverOrg. They do a great job of constantly vetting and refreshing their information but contact is just one piece of the puzzle; what about the rest of a marketing team’s data? That data also needs to be evaluated and refreshed on a constant basis, while also bearing in mind customer privacy, their preferences for engagement, and what approaches are most likely to be effective.

The challenge is that the current volume and velocity of data simply can’t be managed manually. To be successful, marketers need to rely on AI and machine learning to do the heavy lifting. While in its early stages, this approach is already paying dividends to forward looking marketing organizations.

These systems can do so much more than simply ensure data quality. Technologies, like those from Zylotech - a self-learning customer data platform that enriches audience data, predicts purchase behavior, and helps increase sales - are able to ingest, normalize, append, and segment customer data in new and interesting ways. These platforms are becoming incredibly sophisticated, transforming audience data into customer intelligence and putting that intelligence to work for sound business results.

That jump is a game-changer. For the first time, B2B marketers have an intelligent virtual ally in their corner. This doesn’t mean that marketers can take a hands-off, automated approach - not by a long shot. Marketers need to apply their own intelligence and creativity to develop campaigns and programs that harness machine learning and AI while preserving customer trust. It’s a fine line and one that is constantly shifting. Thankfully, there is a feedback loop that allows marketers to see what’s working or what might be causing prospects to flee.

AI can learn those business engagement boundaries, but human intelligence needs to trace their outlines for the AI to learn and operate effectively. That’s a higher level function for both marketers and AI. It may sound daunting, but it will soon come to seem obvious. Machine learning and AI - coupled with our own creativity and intelligence - will have a huge upside and are poised to rewrite the rules of marketing.

30 Dec 21:23

A cooperative, ethical Uber or Airbnb soon?

Cooperative platforms are still struggling to get known and reach a wide audience. A look at a model that is only just beginning to emerge.
12 Dec 21:14

Your Search Diet - How Not to Over EAT

by jennyhalasz@slideshare.net(jennyhalasz)

Like being healthy in life, being healthy in business requires balance and a full understanding of the elements involved.
11 Nov 14:00

Gartner: Blockchain Tech Used by Enterprises at Risk of Becoming Obsolete Within 18 Months

by Cointelegraph By Thomas Simms

Research firm Gartner has warned 90% of the blockchain technology used by enterprises will need to be replaced within the next 18 months

11 Nov 14:00

Internet Authority: History of Centralized Companies Being Hostile Toward Crypto

by Cointelegraph By Henry Linver

Centralized companies can still adversely impact crypto — the community reacts: “There is a solution to protect ourselves against potential abuse”

09 Sep 20:32

Building a Direct-to-Consumer Strategy Without Alienating Your Distributors

by Ned Calder

Companies increasingly use digital technologies to circumvent distributors and enter into direct relationships with their end-users. These relationships can create efficient new sales channels and powerful feedback mechanisms or unlock entirely new business models. But they also risk alienating the longstanding partners that companies count on for their core business.

The auto industry is a case in point. Porsche’s Passport program allows consumers to subscribe via a phone app to a range of vehicles for a fixed monthly fee. Your chosen Porsche is delivered to your house with insurance and maintenance as well as unlimited miles and flips to other models included. But if you’re a Porsche dealer, how do you like this idea? Now consider that similar subscription services are being offered by Volvo, Lincoln, BMW, and Mercedes, with more to follow.

These direct-to-consumer offers threaten the very livelihood of dealerships, who historically have owned the customer relationship. And many dealers are pushing back. The California New Car Dealers Association lobbied for a law that required subscriptions to go through dealers. Volvo’s program has elicited so much criticism that dealers have mobilized the Indiana state legislature to outlaw the business model.

This is but one example of the digital Catch-22, the dilemma that most manufacturers and service companies face when creating new distribution channels. As a result, many B2B companies remain stuck in a stalemate. Writing in the Sloan Management Review, Boston College professor Gerald Kane noted that 87% of executives surveyed indicated that digital technologies will disrupt their industries to a great or moderate extent. Yet fewer than half felt that their companies were doing enough to address this disruption.

We frequently find that executive teams understand the potential of a reinvented distribution strategy; however, they are unclear on how to proceed. While the opportunity is compelling, so is the potential to upset existing distribution partners and thereby damage the core business. Disgruntled distribution partners may retaliate in ways such as switching to rivals, favoring competing products, or even lobbying for legislative remedies.

How can companies position for the future without putting their current business in jeopardy? Here are three strategies for developing digital distribution approaches that minimize risk:

Embrace Stealth

In the past, companies looking to test new business models could quietly enter a new geography free from restrictive distribution contracts that limit their ability to go direct in their traditional geographies. But that is harder to do in the digital age, as customers and partners anywhere can easily see what you’re doing online.

Alternatively, the company can operate in stealth mode by targeting customer segments that have been poorly served or ignored by traditional distributors.

Recently, Verizon quietly launched a startup called Visible which offers no-contract mobile phone service subscriptions for a $40 flat fee and is only available for purchase through an app. This model competes mainly with smaller-brand, low-end providers and may not be seen as a direct threat by Verizon’s massive distribution network of company-owned, partner, and authorized reseller stores that are selling higher-margin services.

Sometimes, an entirely new product provides the right entry point. Starting in 2011, Mercedes chose to develop direct distribution capabilities for electric bicycle sales under its Smart brand.

Mercedes’ strategy preserves its traditional distribution network for its major lines of vehicles, while enabling the company to build the capabilities and infrastructure needed to support a reinvented distribution strategy — selling to consumers rather than through traditional dealerships.

Create Hooks

Distribution partners’ willingness to retaliate can be minimized if companies are able to create hooks that bind those partners and reduce their negotiating leverage. There are many ways to build hooks, including bundling products, monopolizing a category, or developing features that are indispensable to a subset of customers.

For example, Cree Inc. made a splash when it introduced affordable consumer LED lightbulbs in the early 2010s. For several years the company was both a cost and product feature leader in the category. This enabled Cree to command significant shelf space in Home Depot, while simultaneously building a direct-to-consumer business. During this period, Home Depot was compelled to carry Cree products. This dual distribution strategy resonated with both consumers and investors — as Cree’s stock price tripled from 2011 to 2013.

In 2012, with the launch of the Surface product line, Microsoft began directly competing with the manufacturers and OEMs who had been its distribution partners for decades. Microsoft was able to do so largely due to its monopolization of the desktop operating system market. Traditional Microsoft partners such as Acer, Lenovo, HP, and Dell were already hooked on Windows and had little choice but to accept Microsoft’s direct-to-consumer strategy.

In fact, many of Microsoft’s partners, at least publicly, were supportive of the Surface. In 2012, Acer’s founder, Stan Shih, indicated that he believed the Surface was only intended to stimulate market demand and that “once the purpose [was] realized, Microsoft [would] offer more models.” Today, the Surface product line has a greater share than Acer does in the U.S. market for personal computers.

Minimize Pain

Supporting downstream partners’ business can also reduce the risk of retaliation.

The heavy equipment manufacturer Caterpillar, for example, introduced a vehicle management platform that provides customers with insights on vehicle utilization, health, and location. The platform is sold directly to customers — frequently removing downstream partners from the sales process. Ultimately, though, the platform benefits partners because it alerts customers when they need to get their equipment serviced by these local partners — a key revenue stream for Caterpillar’s distributors.

UnitedHealth Group, one of the largest health insurers in the U.S., is on the verge of becoming the nation’s largest employer of physicians. But under its subsidiary Optum, UnitedHealth Group has pursued an aggressive M&A strategy to build its direct-to-consumer capabilities while being careful not to upset traditional healthcare providers. For example, Optum has continued to accept more than 80 types of health insurance across its facilities and has avoided restricting United insurance customers to Optum-owned providers. Optum’s deliberate strategy has caught the industry’s attention, but to date has avoided direct retaliatory actions by incumbent healthcare providers.

Digital represents a significant opportunity for many B2B companies, but also risk. Failure to act enables competitors and new entrants, while action risks retaliation from existing partners. To break this stalemate, leadership should align on the imperative to act, acknowledge the risks of action, and identify the right strategy with which to move ahead. Your long-term partners are more likely to stand by you if they see your direct-to-consumer move not as an act of aggression but as a plan for growth.

09 Sep 20:24

5G’s Potential, and Why Businesses Should Start Preparing for It

by Omar Abbosh

The technology will allow for a range of new products and services.

15 Aug 13:32

MusicBrainz Server update, 2019-08-08

by yvanzo

This summery release brings one main new feature: collaborative collections! As an editor, you can now share your collections with others. This is mainly intended for community projects, but it can also be a good way to, say, have a shared “Music we have at home” collection with your family, or collect artists with funny names with your friends. You decide how to use it!

To add collaborators to your collections, edit the collection and enter the editors you’d want as collaborators in the appropriate section (suggestion: ask first whether they’re interested, then add them!). Once they’ve been added as collaborators, they’ll be able to add and remove entities from the collection in the same way as you, but they won’t be able to change the title / description: that’s still only for the collection owner to change.

The release also comes with a bunch of small improvements and bug fixes, including a couple about collections, and continues migrating to React.

Thanks to Ge0rg3 and sothotalker for their contributed code. Also, thanks to chaban, chiark, cyberskull, Dmitry, hibiscuskazeneko, jesus2099, Lotheric, mfmeulenbelt, psychoadept and everyone who tested the beta version, reported issues, or updated the website translations.

The git tag is v-2019-08-08.

Bug

  • [MBS-8867] – Guess Case normalizes “C’mon” as “C’Mon”
  • [MBS-9512] – Changing recording name to empty string should not be allowed
  • [MBS-10100] – ISE without “non-required” attributes for admin/attributes/Language/create
  • [MBS-10133] – Error message when sending an empty query to the WS is unclear
  • [MBS-10212] – SoundCloud URL with trailing slash is not displayed with user name in artist sidebar
  • [MBS-10218] – Regression: Cover Art tab not selected / highlit on release page
  • [MBS-10233] – Regression: ISE when trying to cancel a “add release annotation” edit

Improvement

  • [MBS-8569] – Don’t display ended legal names in the overview page for artists
  • [MBS-9381] – Show user’s own private collections in the list of collections for an entity
  • [MBS-10135] – Support WikiaParoles as its own site rather than LyricWiki
  • [MBS-10139] – Clarify why recording lengths can’t be edited when non standalone
  • [MBS-10210] – Only allow allowed frequencies in language admin form
  • [MBS-10215] – Make ISO number required for script admin form
  • [MBS-10217] – Explain what renaming artist credits does when editing artist
  • [MBS-10219] – Add Muziekweb to other DBs whitelist, with sidebar display
  • [MBS-10222] – Pull legal name alias instead of legal name artist for the relationship Artist-Artist “perform as/legal name”
  • [MBS-10224] – Don’t show the same legal name string multiple times in artist overview
  • [MBS-10246] – Don’t assume all event collections are attendance lists
  • [MBS-10272] – Convert the header / navbar to Bootstrap

New Feature

  • [MBS-8915] – Allow editors to choose delimiter in track parser
  • [MBS-9428] – Allow multiple users to share one collection

React Conversion Task

  • [MBS-9914] – Convert the area public pages to React
  • [MBS-10047] – Convert /oauth2/ pages to React

Other Task

  • [MBS-10131] – Update LyricWiki domain to lyrics.fandom.com

31 Jul 22:16

Blockchain Governance: How Boundaries Can Help the Blockchain to Scale

by Jenny Scribani

The blockchain offers a long overdue upgrade for our changing economy.

However, the world isn’t quite ready for broadscale blockchain adoption. The technology is still in its relative infancy, and to reach its true potential the blockchain must be able to successfully replace existing systems while also operating at meaningful scale.

Today’s infographic comes to us from eXeBlock Technology, and it explores how good blockchain governance can help solve the pressing challenges around blockchain adoption and implementation, including the ever-present issue of scalability.

So You Say You Want A Blockchain

While it’s relatively easy to implement a blockchain in an organization, it’s far more difficult to decide just how that network should operate. For a blockchain to generate and hold any real competitive advantage, there are a few key questions to consider:

Scalability
How big can you grow before sacrificing efficiency? As the blockchain grows, so does the number of nodes needed to process transactions. This creates a bottleneck and slows down the system.

Privacy
What are your privacy needs? The attraction of the blockchain lies in its ability to decentralize information and make it transparent, but this creates a challenge for corporations who use the blockchain to handle sensitive or proprietary information.

Interoperability
Will your blockchain play nicely with other blockchains? There are a number of blockchain configurations – and to date, no cross-industry standards. This means your blockchain might not collaborate smoothly with another blockchain, particularly if the security standards are mismatched.

How Can Blockchain Governance Help?

Blockchain governance is concerned with solving these problems by:

  • Reducing scalability obstacles by finding ways for blockchains to reach consensus faster without sacrificing decentralization
  • Providing a foundation for shared standards, so organizations can collaborate without risking the privacy of their data
  • Providing a framework for adaptability – a playbook for the blockchain to rely on when inevitable problems and security issues crop up

Think of governance as a constitution to help the blockchain run smoothly: it improves efficiency, encourages collaboration, and outlines a course of action when the system falters.

Types of Blockchains

There are four different types of blockchains, each with unique characteristics:

Federated

  • Operates under the leadership of a group, and access is limited to only members of the group
  • Due to limited membership, they are faster, can scale higher, and offer more transaction privacy

Permissioned/private

  • Access might be public or restricted, but only a few users are given permission to view and verify transactions
  • Ideal for database management or auditing services, where data privacy is an issue
  • Compliance can be automated, as the organization has control over the code

Permissionless/public

  • Open-source and available to the public
  • Transactions are transparent to anyone on the network with a block viewer, but anonymous.
  • The ultimate democracy – this fully distributed ledger disrupts current business models by removing the middleman
  • Minimal costs involved: no need to maintain servers or system admins

Hybrid

  • A public blockchain, which hosts a private network with restricted participation
  • The private network generates blocks of hashed data stored on the public blockchain, but without sacrificing data privacy
  • Flexible control over what data is kept private and what is shared on the public ledger
  • Hybrid blockchains offer the benefits of decentralisation and scalability, without requiring consensus from every single node on the network

Within each of these systems, blockchain governance outlines different standards for privacy and security. Governance determines how consensus is reached, and how many nodes are required. It establishes who has access to what information, and how that data is encrypted. Governance sets up the foundations for blockchains to scale according to the needs of the organization.

Blockchain governance exists to smooth the transition to widespread adoption, providing organizations with dynamic solutions to make their blockchain suit their needs without sacrificing the security of decentralization.

The post Blockchain Governance: How Boundaries Can Help the Blockchain to Scale appeared first on Visual Capitalist.

31 Jul 21:19

Data Analysis 9: Data Regression - Computerphile

by Computerphile

Real life doesn't fit into neat categories - Dr Mike Pound on some different ways to regress your data. This is part 9 of the Data Analysis Learning Playlist: https://www.youtube.com/playlist?list=PLzH6n4zXuckpfMu_4Ff8E7Z1behQks5ba
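
As a minimal companion to the video (not the code used in it), here is a Python sketch of one of the simplest members of the regression family: fitting a least-squares straight line to noisy, made-up data with NumPy and reading off the slope and intercept.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.5 * x + 1.0 + rng.normal(scale=2.0, size=x.size)  # noisy samples of y = 2.5x + 1

# Least-squares fit of a degree-1 polynomial (a straight line).
slope, intercept = np.polyfit(x, y, deg=1)
print(f"slope ~ {slope:.2f}, intercept ~ {intercept:.2f}")
```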

This Learning Playlist was designed by Dr Mercedes Torres-Torres & Dr Michael Pound of the University of Nottingham Computer Science Department. Find out more about Computer Science at Nottingham here: https://bit.ly/2IqwtNg

This series was made possible by sponsorship from Google.

https://www.facebook.com/computerphile
https://twitter.com/computer_phile

This video was filmed and edited by Sean Riley.

Computer Science at the University of Nottingham: https://bit.ly/nottscomputer

Computerphile is a sister project to Brady Haran's Numberphile. More at http://www.bradyharan.com
31 Jul 21:19

Data Analysis 6: Principal Component Analysis (PCA) - Computerphile

by Computerphile

PCA - Principal Component Analysis - finally explained in an accessible way, thanks to Dr Mike Pound. This is part 6 of the Data Analysis Learning Playlist: https://www.youtube.com/playlist?list=PLzH6n4zXuckpfMu_4Ff8E7Z1behQks5ba
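
As a minimal companion to the video (not the code used in it), here is a Python sketch of the core PCA recipe: centre the data, take a singular value decomposition, and project onto the leading principal components.

```python
import numpy as np

def pca(X: np.ndarray, n_components: int = 2):
    """Project the rows of X onto the top principal components via SVD."""
    X_centered = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    components = Vt[:n_components]              # principal directions
    explained_variance = (S**2) / (len(X) - 1)  # variance along each direction
    return X_centered @ components.T, explained_variance[:n_components]

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
X[:, 1] = 0.9 * X[:, 0] + 0.1 * X[:, 1]  # make two features strongly correlated
projected, variance = pca(X)
print(projected.shape, variance)
```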

This Learning Playlist was designed by Dr Mercedes Torres-Torres & Dr Michael Pound of the University of Nottingham Computer Science Department. Find out more about Computer Science at Nottingham here: https://bit.ly/2IqwtNg

This series was made possible by sponsorship from Google.

The music dataset can be found here: https://github.com/mdeff/fma

https://www.facebook.com/computerphile
https://twitter.com/computer_phile

This video was filmed and edited by Sean Riley.

Computer Science at the University of Nottingham: https://bit.ly/nottscomputer

Computerphile is a sister project to Brady Haran's Numberphile. More at http://www.bradyharan.com
20 Jul 09:44

Flash - DNA, your new data storage medium

DNA is well on its way to becoming the data storage medium of tomorrow. The startup Catalog, in particular, is working on the subject, and to demonstrate its innovation it has copied the entirety of the English-language Wikipedia into DNA.
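To give a sense of the underlying idea (a generic illustration, not Catalog's actual encoding scheme), binary data can be mapped onto the four DNA bases at two bits per nucleotide:

# Generic illustration of DNA data storage, not Catalog's actual scheme:
# map every 2 bits of a message onto one of the four nucleotides.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Wiki")
print(strand)          # "CCCTCGGCCGGTCGGC"
print(decode(strand))  # b'Wiki'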


Feel free to subscribe to the podcast, and above all, share it with the people around you!

Website: www.anti-brouillard.fr

Instagram: www.instagram.com/antibrouillard/

Twitter: www.twitter.com/Anti_brouillard

Facebook: www.facebook.com/anti.brouillard.podcast/

Email: anti.brouillard.podcast@gmail.com

And my personal LinkedIn: www.linkedin.com/in/fabienroques/

___

Producer and host: Fabien Roques

Music credit: Music by Joakim Karud youtube.com/joakimkarud

Logo credit: Axel Delbrayère - http://delbrayere.com/

06 Jul 18:28

The 15 Biggest Data Breaches in the Last 15 Years

by Jeff Desjardins

There’s no doubt that data breaches are a primary concern for people on the technological side of any modern business.

However, it’s increasingly the case that C-suite executives are catching wind of the potential business ramifications that these breaches can trigger.

In 2013, for example, Yahoo was hacked and all three billion of its accounts were compromised – a breach that later nearly derailed Verizon’s $4.8 billion bid to acquire the company. In the end, experts say the breach knocked $350 million off Yahoo’s sale price.

Counting Down the Breaches

Today’s infographic comes to us from Hosting Tribunal, and it highlights the biggest data breaches over the last 15 years.


Did you know that a whopping 14,717,618,286 records have been stolen since 2013?

It’s part of a much larger problem, and some experts anticipate that by 2021 the cost of cybercrime to the global economy will eclipse $6 trillion – a potential impact that would even exceed the size of the current Japanese economy ($4.9 trillion).

The 15 Biggest Data Breaches

Here are the most notable breaches that have occurred over the last 15 years, in ascending chronological order:

Year | Company | Impact
2004 | AOL | 92 million screen names and email addresses stolen
2013 | Yahoo | All 3 billion accounts compromised
2013 | Target | 110 million compromised accounts, incl. 40 million payment credentials
2014 | eBay | 145 million compromised accounts
2015 | Anthem Inc. | 80 million company records hacked, including Social Security numbers
2016 | LinkedIn | 117 million emails and passwords leaked
2016 | MySpace | 360 million compromised accounts
2016 | Three | 133,827 compromised accounts, including payment methods
2016 | Uber | 57 million compromised accounts
2017 | Equifax | 143 million accounts exposed, including 209k credit card numbers
2018 | Marriott | 500 million compromised accounts
2018 | Cathay Pacific | 9.4 million compromised accounts, including 860k passport numbers
2018 | Facebook | 50 million compromised accounts
2018 | Quora | 100 million compromised accounts
2018 | Blank Media | 7.6 million compromised accounts

Most of these breaches led to millions, or even billions, of records being compromised.

And while the motives behind cyberattacks can vary from case to case, the business impact of hacks at this scale should make any executive tremble.


The post The 15 Biggest Data Breaches in the Last 15 Years appeared first on Visual Capitalist.

30 May 21:40

McDonald's Turns to AI for a Better Burger

by Claire Carroll

While Burger King may still be poking fun at AI following its robot commercials last year, other burger titans are embracing AI as the next step in their evolution. This year, Wendy’s announced that it is adding $25 million to its digital budget, while McDonald’s took things to the next level by purchasing a marketing AI startup for over $300 million, its largest acquisition this century. Burger joints are leveling up their technology stacks to drive seamless user experiences.

29 May 22:41

Hollywood is quietly using AI to help decide which movies to make

by James Vincent

AI will tell you who to cast and predict how much money you’ll make

The film world is full of intriguing what-ifs. Will Smith famously turned down the role of Neo in The Matrix. Nicolas Cage was cast as the lead in Tim Burton’s Superman Lives, but he only had time to try on the costume before the film was canned. Actors and directors are forever glancing off projects that never get made or that get made by someone else, and fans are left wondering what might have been.

For the people who make money from movies, that isn’t good enough.

If casting Alicia Vikander instead of Gal Gadot is the difference between a flop and smash hit, they want to know. If a movie that bombs in the US would have set box office records across Europe, they want to know. And now, artificial intelligence can tell them.

artificial intelligence turns filmmaking into fantasy football

Los Angeles-based startup Cinelytic is one of the many companies promising that AI will be a wise producer. It licenses historical data about movie performances over the years, then cross-references it with information about films’ themes and key talent, using machine learning to tease out hidden patterns in the data. Its software lets customers play fantasy football with their movie, inputting a script and a cast, then swapping one actor for another to see how this affects a film’s projected box office.

Say you have a summer blockbuster in the works with Emma Watson in the lead role, says Cinelytic co-founder and CEO Tobias Queisser. You could use Cinelytic’s software to see how changing her for Jennifer Lawrence might change the film’s box office performance.

“You can compare them separately, compare them in the package. Model out both scenarios with Emma Watson and Jennifer Lawrence, and see, for this particular film … which has better implications for different territories,” Queisser tells The Verge.
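None of these vendors publish their models, but the general shape of the idea is easy to sketch: train a regression on historical film data, then re-score the same project with a different lead. The snippet below is a toy illustration with invented data and a made-up "star power" feature, not Cinelytic's software.

# Toy illustration of the "swap the lead actor" idea; invented data, not Cinelytic.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

history = pd.DataFrame({
    "budget_musd":     [150, 90, 40, 200, 60, 120],
    "is_franchise":    [1,   0,  0,  1,   0,  1],
    "lead_star_power": [80, 65, 50, 90,  55, 70],    # made-up popularity score
    "box_office_musd": [610, 240, 95, 880, 130, 420],
})

model = GradientBoostingRegressor(random_state=0).fit(
    history[["budget_musd", "is_franchise", "lead_star_power"]],
    history["box_office_musd"],
)

project = {"budget_musd": 170, "is_franchise": 1}
for lead, star_power in [("Lead A", 85), ("Lead B", 72)]:
    scenario = pd.DataFrame([{**project, "lead_star_power": star_power}])
    print(lead, "->", round(float(model.predict(scenario)[0]), 1), "M USD projected")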

Cinelytic isn’t the only company hoping to apply AI to the business of film. In recent years, a bevy of firms has sprung up promising similar insights. Belgium’s ScriptBook, founded in 2015, says its algorithms can predict a movie’s success just by analyzing its script. Israeli startup Vault, founded the same year, promises clients that it can predict which demographics will watch their films by tracking (among other things) how its trailers are received online. Another company called Pilot offers similar analyses, promising it can forecast box office revenues up to 18 months before a film’s launch with “unrivaled accuracy.”

The water is so warm, even established companies are jumping in. Last November, 20th Century Fox explained how it used AI to detect objects and scenes within a trailer and then predict which “micro-segment” of an audience would find the film most appealing.

Looking at the research, 20th Century Fox’s methods seem a little hit or miss. (Analyzing the trailer for 2017’s Logan, the company’s AI software came up with the following, unhelpful tags: “facial_hair,” “car,” “beard,” and — the most popular category of all — “tree.”) But Queisser says the introduction of this technology is overdue.

“On a film set now, it’s robots, it’s drones, it’s super high-tech, but the business side hasn’t evolved in 20 years,” he says. “People use Excel and Word, fairly simplistic business methods. The data is very siloed, and there’s hardly any analytics.”

That’s why Cinelytic’s key talent comes from outside Hollywood. Queisser used to be in finance, an industry that’s embraced machine learning for everything from high-speed trading to calculating credit risk. His co-founder and CTO, Dev Sen, comes from a similarly tech-heavy background: he used to build risk assessment models for NASA.

“Hundreds of billions of dollars of decisions were based on [Sen’s work],” says Queisser. The implication: surely the film industry can trust him as well.

But are they right to? That’s a harder question to answer. Cinelytic and other companies The Verge spoke to declined to make any predictions about the success of upcoming movies, and academic research on this topic is slim. But ScriptBook did share forecasts it made for movies released in 2017 and 2018, which suggest the company’s algorithms are doing a pretty good job. In a sample of 50 films, including Hereditary, Ready Player One, and A Quiet Place, just under half made a profit, giving the industry a 44 percent accuracy rate. ScriptBook’s algorithms, by comparison, correctly guessed whether a film would make money 86 percent of the time. “So that’s twice the accuracy rate of what the industry achieved,” ScriptBook data scientist Michiel Ruelens tells The Verge.
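A quick back-of-the-envelope check of those figures, using only the numbers quoted above (taking "just under half" as 22 of the 50 films):

# Sanity check on the quoted accuracy figures; 22/50 stands in for "just under half".
films_in_sample = 50
profitable_films = 22
industry_accuracy = profitable_films / films_in_sample    # every film in the sample was greenlit
scriptbook_accuracy = 0.86                                 # quoted figure

print(round(industry_accuracy, 2))                         # 0.44
print(round(scriptbook_accuracy / industry_accuracy, 1))   # roughly 2.0, i.e. twice the accuracy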

An academic paper published on this topic in 2016 similarly claimed that reliable predictions about a movie’s profitability can be made using basic information like a film’s themes and stars. But Kang Zhao, who co-authored the paper along with his colleague Michael Lash, cautions that these sorts of statistical approaches have their flaws.

One is that the predictions made by machines are frequently just blindingly obvious. You don’t need a sophisticated and expensive AI software to tell you that a star like Leonardo DiCaprio or Tom Cruise will improve the chances of your film being a hit, for example.

Algorithms are also fundamentally conservative. Because they learn by analyzing what’s worked in the past, they’re unable to account for cultural shifts or changes in taste that will happen in the future. This is a challenge throughout the AI industry, and it can contribute to problems like AI bias. (See, for example, Amazon’s scrapped AI recruiting tool that penalized female candidates because it learned to associate engineering prowess with the job’s current male-dominated intake.)

Because AI learns from past data, it can’t predict future cultural shifts

Zhao offers a more benign example of algorithmic shortsightedness: the 2016 action fantasy film Warcraft, which was based on the MMORPG World of Warcraft. Because such game-to-movie adaptations are rare, he says, it’s difficult to predict how such a film would perform. The film did badly in the US, taking in only $24 million in its opening weekend. But it was a huge hit in China, becoming the highest grossing foreign language film in the country’s history.

Who saw that coming? Not the algorithms.

AI didn’t predict the success of ‘Warcraft.’ (In fairness, neither did the humans.)

There are similar stories in ScriptBook’s predictions for 2017 / 2018 movies. The company’s software correctly greenlit Jordan Peele’s horror hit Get Out, but it underestimated how popular it would be at the box office, predicting $56 million in revenue instead of the actual $176 million it made. The algorithms also rejected The Disaster Artist, the tragicomic story of Tommy Wiseau’s cult classic The Room, starring James Franco. ScriptBook said the film would make just $10 million, but it instead took in $21 million — a modest profit on a $10 million film.

As Zhao puts it: “We are capturing only what can be captured by data.” To account for other nuances (like the way The Disaster Artist traded on the memeiness of The Room), you have to have humans in the loop.

Andrea Scarso, a director at the UK-based Ingenious Group, agrees. His company uses Cinelytic’s software to guide investments it makes in films, and Scarso says the software works best as a supplementary tool.

“Sometimes it validates our thinking, and sometimes it does the opposite: suggesting something we didn’t consider for a certain type of project,” he tells The Verge. Scarso says that using AI to play around with a film’s blueprint — swapping out actors, upping the budget, and seeing how that affects a film’s performance — “opens up a conversation about different approaches,” but it’s never the final arbiter.

“I don’t think it’s ever changed our mind,” he says of the software. But it has plenty of uses all the same. “You can see how, sometimes, just one or two different elements around the same project could have a massive impact on the commercial performance. Having something like Cinelytic, together with our own analytics, proves that [suggestions] we’re making aren’t just our own mad ideas.”

But if these tools are so useful, why aren’t they more widely used? ScriptBook’s Ruelens suggests one un-Hollywood characteristic might be to blame: bashfulness. People are embarrassed. In an industry where personal charisma, aesthetic taste, and gut instinct count for so much, turning to the cold-blooded calculation of a machine looks like a cry for help or an admission that you lack creativity and don’t care about a project’s artistic value.

Ruelens says ScriptBook’s customers include some of the “biggest Hollywood studios,” but nondisclosure agreements (NDAs) prevent him from naming any. “People don’t want to be associated with these AIs yet because the general consensus is that AI is bad,” says Ruelens. “Everyone wants to use it. They just don’t want us to say that they’re using it.” Queisser says similar agreements stop him from discussing clients, but that current customers include “large indie companies.”

Hollywood is unlikely to accept AI having the final say anytime soon

Some in the business push back against the claim that Hollywood is embracing AI to vet potential films, at least when it comes to actually approving or rejecting a pitch. Alan Xie, CEO of Pilot Movies, a company that offers machine learning analytics to the film industry, tells The Verge that he’s “never spoken to an American studio executive who believes in [AI] script analysis, let alone [has] integrated it into their decision-making process.”

Xie says it’s possible studios simply don’t want to talk about using such software, but he says script analysis, specifically, is an imprecise tool. The amount of marketing spend and social media buzz, he says, are a much more reliable predictor of box office success. “Internally at Pilot, we’ve developed box office forecast models that rely on script features, and they’ve performed substantially worse than models that rely on real-time social media data,” he says.

Despite skepticism about specific applications, the tide might be turning. Ruelens and investment director Scarso say a single factor has convinced Hollywood to stop dismissing big data: Netflix.

The streaming behemoth has always bragged about its data-driven approach to programming. It surveils the actions of millions of subscribers in great detail and knows a surprising amount about them — from which thumbnail will best convince someone to click on a movie to the choices they make in Choose Your Own Adventure-style tales like Black Mirror: Bandersnatch. “We have one big global algorithm, which is super-helpful because it leverages all the tastes of all consumers around the world,” said Netflix’s head of product innovation, Todd Yellin, in 2016.

Netflix regularly changes the thumbnails on TV shows and films to see what appeals to different viewers.

It’s impossible to say whether Netflix’s boasts are justified, but the company claims its recommendation algorithm alone is worth $1 billion a year. (It surely doesn’t hurt that such talk puts fear into the competition.) Combined with its huge investments into original content, it’s enough to make even the most die-hard Hollywood producer reach for a fortifying algorithm.
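Netflix has not said exactly how this works, but thumbnail selection of this kind is commonly framed as a multi-armed bandit problem. The snippet below is a generic epsilon-greedy sketch with invented click rates, not Netflix's system.

# Generic epsilon-greedy bandit sketch for picking a thumbnail; not Netflix's system.
import random

random.seed(0)
true_click_rates = {"thumb_a": 0.05, "thumb_b": 0.11, "thumb_c": 0.08}  # unknown in practice
shows = {t: 0 for t in true_click_rates}
clicks = {t: 0 for t in true_click_rates}

def choose(epsilon: float = 0.1) -> str:
    if random.random() < epsilon:
        return random.choice(list(true_click_rates))   # explore a random thumbnail
    # Exploit the best observed click rate; untried thumbnails get priority.
    return max(shows, key=lambda t: clicks[t] / shows[t] if shows[t] else float("inf"))

for _ in range(10_000):
    thumb = choose()
    shows[thumb] += 1
    if random.random() < true_click_rates[thumb]:      # simulated viewer click
        clicks[thumb] += 1

print(shows)   # the thumbnail with the best observed rate ends up shown the most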

Ruelens says the transformation has been noticeable. “When we started out four years ago, we had meetings with big companies in Hollywood. They were all super skeptical. They said ‘We have [decades] of expertise in the industry. How can this machine tell us what to do?’” Now, things have changed, he says. The companies did their own validation studies, they waited to see which predictions the software got right, and, slowly, they learned to trust the algorithms.

“They’re starting to accept our technology,” says Ruelens. “It just took time for them to see.”

29 May 05:19

Forget the Rules, Listen to the Data

by community-noreply@hitachivantara.com

 

Rule-based fraud detection software is being replaced or augmented by machine-learning algorithms that do a better job of recognizing fraud patterns that can be correlated across several data sources. DataOps is required to engineer and prepare the data so that the machine learning algorithms can be efficient and effective.

 

Fraud detection software has traditionally been based on rule-based models. A 2016 CyberSource report claimed that over 90% of online fraud detection platforms use transaction rules to flag suspicious transactions, which are then directed to a human for review. We’ve all received that phone call from our credit card company asking if we made a purchase in some foreign city.

 

This traditional approach of using rules or logic statements to query transactions is still used by many banks and payment gateways today, and the bad guys are having a field day. In the past 10 years, incidents of fraud have escalated thanks to new technologies, like mobile, that banks have adopted to better serve their customers. These new technologies open up new risks such as phishing, identity theft, card skimming, viruses and Trojans, spyware and adware, social engineering, website cloning, cyberstalking and vishing (if you have a mobile phone, you have likely had to contend with the increasing number and sophistication of vishing scams). Criminal gangs use malware and phishing emails to compromise customers’ security and personal details and commit fraud. Fraudsters can easily game a rule-based system. Rule-based systems are also prone to false positives, which can drive away good customers, and they become unwieldy as more exceptions and changes are added, leaving them overwhelmed by today’s sheer volume and variety of new data sources.

 

For this reason, many financial institutions are converting their fraud detection systems to machine learning and advanced analytics and letting the data detect fraudulent activity. Today’s analytic tools, running on modern compute and storage systems, can analyze huge volumes of data in real time, integrate and visualize an intricate network of unstructured and structured data, generate meaningful insights, and provide real-time fraud detection.
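As an illustration of that shift (a generic sketch, not Hitachi's implementation), the snippet below contrasts a hand-written transaction rule with an unsupervised anomaly score from scikit-learn. The feature names and thresholds are invented for the example.

# Minimal sketch: a hand-written rule vs. a learned anomaly score.
# Feature names and thresholds are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [amount_usd, distance_from_home_km, seconds_since_last_txn]
transactions = np.array([
    [12.50,      2, 3600],
    [49.99,      5, 7200],
    [8.75,       1, 1800],
    [2500.00, 9500,   60],   # large foreign purchase seconds after the previous one
])

# Rule-based approach: every threshold has to be written and maintained by hand.
def rule_flag(txn):
    amount, distance, gap = txn
    return amount > 1000 or (distance > 500 and gap < 300)

# Learned approach: the model estimates what "normal" looks like from the data itself.
model = IsolationForest(contamination=0.25, random_state=0).fit(transactions)
scores = model.decision_function(transactions)   # lower score = more anomalous

for txn, score in zip(transactions, scores):
    print(rule_flag(txn), round(float(score), 3))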

 

However, in the rush to do this, many of these systems have been poorly architected to address the total analytics pipeline. This is where DataOps comes into play. A Big Data Analytics pipeline, from ingestion of data to embedded analytics, consists of three steps (a short code sketch follows the list):

 

  1. Data Engineering: The first step is flexible data on-boarding that accelerates time to value. This requires a product that can ETL (Extract, Transform, Load) the data from the acquisition application, which may be a transactional database or sensor data, and load it in a data format that an analytics platform can process. Regulated data also needs to show lineage, a history of where the data came from and what has been done with it; this requires another product for data governance.
  2. Data Preparation: Data integration that is intuitive and powerful. Data typically goes through transforms to put it into an appropriate format; this step is colloquially called data wrangling and requires yet another set of products.
  3. Analytics: Integrated analytics to drive business insights. This requires analytic products that may be specific to the data scientist or analyst, depending on their preference for analytic models and programming languages.
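To make those three steps concrete, here is a minimal, self-contained sketch of such a pipeline using pandas and scikit-learn. It illustrates the pattern only; it is not Pentaho's orchestration, and the file name and column names are hypothetical.

# Minimal sketch of the three-step pipeline described above (not Pentaho).
# "transactions.csv" and its columns are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

# 1. Data engineering: ingest the raw transactions (extract/load).
raw = pd.read_csv("transactions.csv")

# 2. Data preparation: clean and reshape the data ("data wrangling").
prepared = raw.assign(
    amount=lambda df: df["amount"].fillna(0.0),
    is_foreign=lambda df: (df["merchant_country"] != "US").astype(int),
)[["amount", "is_foreign"]]

# 3. Analytics: score every transaction and surface the most suspicious ones.
model = IsolationForest(contamination=0.01, random_state=0).fit(prepared)
report = raw.assign(fraud_score=-model.decision_function(prepared))
print(report.sort_values("fraud_score", ascending=False).head())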

 

A data pipeline architected around so many separate products will be costly, hard to manage and very brittle as data moves from product to product.

 

Hitachi Vantara’s Pentaho Business Analytics can address DataOps for the entire Big Data Analytics pipeline with one flexible orchestration platform that can integrate different products and enable teams of data scientists, engineers, and analysts to train, tune, test and deploy predictive models.

 

Pentaho is open source-based and has a library of PDI (Pentaho Data Integration) connectors that can ingest structured and unstructured data, including MQTT (Message Queue Telemetry Transport) data flows from sensors. A variety of data sources, processing engines, and targets are supported, including Spark, Cloudera, Hortonworks, MapR, Cassandra, Greenplum, Microsoft and Google Cloud. It also has a data science pack that lets you operationalize models trained in Python, Scala, R, Spark, and Weka, and it supports deep learning through a TensorFlow step. And since it is open, it can interface with products like Tableau if the user prefers them. Pentaho provides an intuitive drag-and-drop interface to simplify the creation of analytic data pipelines. For a complete list of the PDI connectors, data sources and targets, languages, and analytics, see the Pentaho Data Sheet.

 

Pentaho enables the DataOps team to streamline the data engineering, data preparation and analytics process and to enable more citizen data scientists, a role Gartner defines in “Citizen Data Science Augments Data Discovery and Simplifies Data Science” as a person who creates or generates models that use advanced diagnostic analytics or predictive and prescriptive capabilities, but whose primary job function is outside the field of statistics and analytics. Pentaho’s approach to DataOps has made it easier for non-specialists to create robust analytics data pipelines, and it lets analytic and BI tools extend their reach by making both data and analytics more accessible. Citizen data scientists are “power users” who can perform both simple and moderately sophisticated analytical tasks that would previously have required more expertise. They do not replace data science experts, as they lack that specific, advanced data science expertise, but they bring their own expertise around the business problems and innovations that are relevant to them.

 

In fraud detection, the data and scenarios change faster than a rule-based system can keep track of, leading to a rise in false-positive and false-negative rates that makes these systems no longer useful. The increasing volume of data can mire a rule-based system, while machine learning gets smarter as it processes more data. Machine learning can solve this problem because it is probabilistic and uses statistical models rather than deterministic rules. The machine learning models need to be trained on historic data, and the creation of rules is replaced by the engineering of features, which are input variables related to trends in that historic data. In a world where data sources, compute platforms, and use cases are changing rapidly, unexpected changes in data structure and semantics (known as data drift) require a DataOps platform like Pentaho Machine Learning Orchestration to ensure that machine learning stays efficient and effective.
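As a rough illustration of what engineering features from historic data can look like (again a generic sketch, not Pentaho-specific), the snippet below derives per-card behavioural features from a hypothetical transaction log:

# Feature engineering sketch: turn raw transaction history into per-card features
# that a fraud model could be trained on. Column names are hypothetical.
import pandas as pd

txns = pd.DataFrame({
    "card_id":    ["A", "A", "A", "B", "B"],
    "amount":     [12.0, 15.0, 900.0, 40.0, 42.0],
    "is_foreign": [0, 0, 1, 0, 0],
})

features = txns.groupby("card_id").agg(
    txn_count=("amount", "size"),
    avg_amount=("amount", "mean"),
    max_amount=("amount", "max"),
    foreign_share=("is_foreign", "mean"),
)
print(features)   # one row of behavioural features per card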

 

You can visit our website for a hands-on demo of building a data pipeline with Pentaho and see how easy Pentaho makes it to "listen to the data."