Shared posts

14 Feb 09:10

Five Brand Community Building Models That Succeed (and why many fail)

by Richard Millington

If you don’t know which model of community you’re building (and the conventions of that model), you’re probably going to struggle.

Community models can be as distinct from one another as Abrahamic religions. We might share a common faith in connecting people, but the best practices of each model vary greatly.

If you follow #CMGR on Twitter, it often feels like community builders live in different worlds sharing conflicting advice and ideas. The truth is most people are sharing advice that works for their model of community building.

Whether it works for you or not depends largely on if you’re following a similar model to them.

 

Community Building Models Which Work in 2022

The main brand (emphasis on the word brand here) community building models today fall within five buckets. Each bucket has unique conventions.

(click here for full image)

For example, if you’re building a support community, creating a sense of community isn’t such a big deal. Most people just want to get answers to their questions and be on their way. However, if you’re building peer groups or user groups, then creating that sense of belonging is far more important (this is the essence of this debate).

You absolutely can succeed by defying these models and blazing your own trail. But you’re making life much harder for yourself, because the success of each model depends upon embracing the conventions of that model.

 

What Are Community Building Conventions?

As with writing a movie, it’s best to pick a genre and stick to the conventions of that genre.

Sure, sometimes you can defy conventions and succeed, but there’s a reason we don’t see many romantic horror movies; audiences have preferences.

This is true for communities too. Conventions of community models reflect the preferences of their audiences. These preferences cover four key areas:

1) Value to members. This is the unique value you need to offer (and promote) to members. Without the right value, members won’t visit.

2) Critical success factors. This is what needs to be present within your community for members to get that value. Without these, members won’t stick around for long.

3) Key challenges. This is the hard part of making that model work. It’s what your strategy should be designed to overcome.

4) Platform. This is the technology members prefer to achieve that outcome (defy this at your own risk).

So, for example:

  • If you’re building a support community, pick a forum platform which is easy to scan and search for information, nurture a group of superusers, and do everything you can to redirect people with questions from other channels to the community.
  • If you want people to share how they use products, embrace blogs, videos, and social media where people already share expertise. Aggregate them into a single place. Prune the bad and feature the best.
  • If you want advocates to say nice things about you to others, use a dedicated advocacy platform and provide rewards advocates actually want. Put strong training and quality controls in place.
  • If you want to nurture peer groups (especially elite peer groups), you need to keep it exclusive, spend time welcoming each person, and invest heavily in great facilitation (rather than moderation!) on a simple group messaging tool.
  • If you want user groups, then find the right leaders, use dedicated (virtual) event software, and create systems to scale it up rapidly.

The more conventions of the model you embrace, the more successful you’re likely to be.

When you defy the models, you’re going to struggle because you’re trying to change the preferences of your audience.

Member preferences are so critically important (and, sadly, so commonly ignored).

 

Let’s Be Honest About Member Preferences

Understanding the real preferences of members will save you a lot of time and money.

Often we do things that are clearly against the preferences of our audiences.

Here are a few questions:

How many brand communities do you regularly participate in by choice (i.e. not because you need a problem fixed)?

When you engage with peers in the industry, how often do you do it in forums hosted by a major brand?

How often do you contribute articles to knowledge bases in a brand community?

If you answered ‘none/never’ to all of the above, why would you think your audience is any different?

If you’re trying to persuade your members to do something you’ve never done (or wouldn’t do), you’re probably not being honest about member preferences (which usually aren’t so dissimilar from your own).

Too often we try to build communities that are clearly running contrary to the preferences of members. This means we’re going to end up fighting against the odds.

 

Different Audiences Have Different Conventions Too!

To make matters a little more confusing, sometimes there are unique conventions for specific audiences too.

One recent prospect complained that since they moved their developer community from Discourse to their integrated Salesforce platform, participation had plummeted. Developers had simply begun using a member-hosted Slack channel instead.

Why did this happen?

Because Discourse is better for developers: it has features developers like and are more familiar with. It’s a widely accepted convention that developers use Discourse. When you battle natural preferences, you’re usually going to lose.

Likewise, I recently talked a game developer out of launching a forum for gamers to hang out and chat. That’s simply not where gamers go anymore. They prefer Discord, Reddit and other channels.

It doesn’t take much research to identify the conventions you’re working with.

You can battle against user preferences if you want, but it is going to be a battle.

 

If You Don’t Embrace The Right Conventions, You’re Probably Fighting The Odds

Sometimes you see the same mistakes so often you start to wonder why no one seems to learn from them.

For example, around 95% of groups created within hosted brand communities are devoid of any activity.

This means your odds of making sub-groups work within your hosted community are around 5%.

Unless you’re playing the lottery, those are terrible odds.

But that doesn’t stop community after community trying to make it work. Some don’t know the odds, others simply think that their situation is different.

And you might well be different. You might just have the audience and unique circumstance to make it work for you, but you’re always going to be fighting the odds.

This is the crux of the problem:

Instead of embracing the natural preferences and behaviors of our audiences, we try to persuade them to change their preferences.

Once you start trying to change preferences of members, you’re stacking the odds against you.

Sadly, many people seem to have a completely false idea of the odds against some of their community plans.

 

Five Reasons We Don’t Fully Appreciate The Odds Of Success

This isn’t a comprehensive list, but in my experience there tend to be five common reasons we don’t fully understand the odds of success in a community activity (or don’t understand the preferences of members).

1) Success bias. Failures are naturally removed from the web, leaving only the successful examples. Yet for every success there might be dozens of failures. Worse, a single visible success often inspires dozens of failed imitations.

2) Imagining people who don’t really exist. We imagine there are time-rich people unlike ourselves who are happy to take the time to learn new tools, write long knowledge articles, and share their success stories with others. If you can’t find an abundance of real people with preferences to match your idea, you need to change your idea (rather than try to change the people!).

3) Preferences change. There was a time when people happily hung out in forums chatting to friends. For some older generations, this may still be true. But I’m betting few (if any) of us still do it today. Preferences have changed and we haven’t stayed current with the modern preferences of our audiences.

4) Software vendors promote their solutions for every situation. There is a big difference between a platform offering a feature and members wanting to use the feature (on that platform). In theory, groups, knowledge bases, and gamification can all be really useful, but most of the time members won’t use (or care about) any of these features. If you haven’t used any of these features on another brand’s community before, why do you think your members will?

5) The desire to ‘Do Something’. A new VP of marketing arrives, sees a collection of disparate platforms, and decides this is confusing, wasteful, and cluttered. She decides to do something and suggests a far better solution: bring everything together into one integrated experience. But this runs contrary to the preferences of members, as we’ll explore in a moment.

Each of these skews your perception of the real odds of success when you run against member preferences.

You can’t resist them entirely, but try to be mindful of their impact.

 

Can You Embrace Multiple Community Building Models At The Same Time?

Yes, but you have to embrace all the conventions (and challenges) of each model!

You can’t just combine them together and hope it works.

For example, many support communities (people asking/answering questions) also try to be a ‘success’ community (where people proactively share best practices).

Typically, this is a struggle. Preferences today suggest people don’t go to forums to learn. In most fields today, people share knowledge and learn directly from blogs, video sharing sites, and other social media tools.

You become a ‘success’ community either by finding members willing to submit this kind of content to your community (difficult) or by aggregating and filtering the content already out there (a lot easier). Patagonia and Peloton are good examples of this.

Likewise, if you want to build private peer groups where people hang out and chat, that’s not likely to happen within a hosted community platform. You need to use platforms where people are most likely to hang out and chat. That’s usually group messaging apps (Slack, WhatsApp groups, even email works well at small scale). You need to keep each small, private, and exclusive (i.e. don’t invite everyone). You need to link these to the community (and vice versa).

This makes measurement difficult.

But, believe me, it’s far better to make life difficult for your metrics than your members.

 

Some Simple Questions To Guide The Model You Use

FeverBee invests a ridiculous amount of time researching and understanding the preferences of our clients’ audiences.

We do this because it’s a lot easier to embrace preferences than try to change them.

If we embrace the preferences, we stack the odds of success in our favour.

Sadly, so many strategies today are built upon wishful thinking rather than preferences identified by members. This usually reflects a failure to take a data-driven approach and to spend anywhere near enough time really understanding the audience of today.

In fact, just by asking a handful of simple questions (shown below) you can uncover a great deal of member preferences.

(click here for full image)

This isn’t a comprehensive list but you get the idea.

Begin the process with a blank slate and a completely open mind and you’re far more likely to find the right model and platform for you.

 

Summary

The model of community you’re developing influences everything you do.

It highlights the value you should communicate to members, the key metrics you need to track, the major challenges you should overcome, and the platform you select.

If your community isn’t where you want it to be today, there’s a very strong chance you either have the wrong model (or you’re not embracing the conventions of that model).

I’d suggest a few key steps here:

1) Use the template questions above to identify value and platform preferences. You will likely find the platform preferences match one of the models.

2) Get everyone aligned on the model. Everyone involved in building the community needs to understand the model you’re using and the conventions of that model. Defy the conventions at your peril.

3) Develop your strategy to achieve the critical success factors of the model. Align everything you do to get those critical success factors in place. Use the resources and strengths you have to overcome the challenges of each model.

Good luck!

The post Five Brand Community Building Models That Succeed (and why many fail) first appeared on FeverBee.

02 Feb 07:46

Learning: Knowledge, Meaning, Capability

by julianstodd

Principles of #WorkingOutLoud allow us to revisit topics with which we feel familiar and comfortable – to question ourselves, to explore new facets, to ask ourselves questions, or address the practical things we have learnt whilst ‘in practice’. Today i am reflecting on aspects of learning design, relating to knowledge, the creation (and co-creation) of meaning, and the building of capability. With that in mind, a quick reminder of spaces: the blog is my first reflective space – so i feel no obligation to make this exploration complete or even fully coherent – if you want my ‘well considered’ work in this area, the ‘Social Learning Guidebook’ is the best place to start.

To the matter in hand: to what extent should we ‘teach’ or convey knowledge, to what extent do we create space to discover, and what, precisely, is it that we discover? Let’s start with some pragmatic definitions: considering ‘knowledge’ as discrete ‘truths’ which have been discovered, and validated, considering ‘meaning’ as the conceptual understanding we create, and considering ‘capability’ as the things we can do.

Let me try to illustrate that: i’m sat in a coffee shop – if the coffee machine breaks down, the quickest way to fix it will be through established knowledge – probably the manufacturer wrote a book on this, and people are trained. If that fails, the operators who have to deal with its quirks every day may have some ideas too. A last resort is for me to fix it: i only have theoretical knowledge, no practical experience whatsoever.

Fixing this coffee machine can be achieved in a number of ways: through that formal codified knowledge, through experiential tacit/tribal knowledge, or, at a push, inference from theoretical knowledge, transposed from other fields (e.g. i know about pressure, about oils, organic chemistry etc).

Codified knowledge may be global: the same knowledge may fix this thing in Singapore or Seattle. Tacit/tribal knowledge may be variable: in Seoul a barista may know how to hit the machine on the side in just the right place – something explicitly forbidden by the manufacturer, but workable in practice. Tacit knowledge tends to be hyper local, and hidden, either accidentally, or by design.

Formal training tends to circulate formal knowledge: Social Learning may surface tacit and tribal knowledge, if we earn it, if we create the circumstance.

Now: in this sense, formal knowledge relates pretty closely to structural capability. If i run 3,000 coffee shops around the world, a strong programme of formal knowledge ‘teaching’, or mechanisms, technologies, and structures, of knowledge access, sounds like a good start.

Indeed, we could go further: if we use Social Learning approaches to ‘discover’ or mine the social knowledge, we can categorise it into that which should be codified, extending our formal knowledge, and that which should be banned (e.g. it turns out that hitting the machine fixes this issue, but breaks the filter more often, overall increasing maintenance costs). This leaves us a secondary issue: to ‘train out’, or otherwise address, the ‘helpful’ local solution (which may be tied into the fact that the local barista may have unearthed a mechanical weakness in the overall system, but has no voice to be heard – it’s an open loop where feedback and potential learning is lost).

Taken as a whole, this rough example illustrates some key concepts: formal knowledge is owned and controlled, hence easy to ‘give’ to people, easy to ‘test’ for retention, even for application. But not necessarily ‘good’ or ‘right’. Scaffolded Social Learning approaches may enable us to enhance or amend this, or loop back to rectifying behaviour, or addressing local needs or issues. And, between these two things, capability may be built.

But coffee machines are one thing: what about e.g. leadership, or even something like ‘sales training’, or ‘strategy’? What about ‘mindfulness’ (if it’s really a thing), or ‘ethics’? Social skills, behaviour, things that are subjective, personal, practice based. Things that are contextual, possibly without a global truth or reach. Probably, in fact, most of what we need to ‘perform’.

A reminder: these are very much a part of capability, but are not traditionally ‘teachable’. But nor are they innate. These are developed truths (philosophers and scientists may both throw up their hands at this) – e.g. i find my truth, and you find yours. And then we ‘perform’.

Strengths and weaknesses: highly divergent (although possibly within frameworks or a matrix, if we can discern it), developed over time, not instantly, hard to measure, hard to spread, hard to control etc. But grounded in practice, often effective. Hard to share. This is a common truth: you and i may both ‘perform’ well, but by entirely different means (externally) and within entirely different cognitive and conceptual frameworks (internally).

Social Learning schemas may help us here, with caution: specifically narrative, storytelling approaches, whereby we can share something of that internal framework – because our individual learning may rely on the fracturing and reforming – constant iterative cycles of this – to change our conceptual frame of operation. I’ve written more about this process here.

Whilst Organisations are currently finding it useful to talk about Learning Science, we also have to remember that instructional design happens within a broader social context too: what learners expect.

It’s easy to dismiss that, but in reality it’s a dominant force. If learners expect to be told, and we ask them to explore, we may get lucky, and unleash them, or we may get unlucky, and confuse, annoy, or lose them altogether. I’ve had all three experiences in the last month alone.

It’s not that we need to ‘meet’ expectation, but we must defy it carefully.

Take it back to ‘knowledge’, ‘meaning’, and ‘capability’ – a foundational question we must ask is which of these we want. Do we just need people to perform within an agreed matrix of capability, and not care if they ‘believe’ it, or do we need people to construct meaning (from which they may build capability, if we support it), or do we just want to give them knowledge?

And does there always have to be knowledge? I’m struggling with this in a programme right now: i think it needs largely to be about ‘meaning’, but i feel obliged to convey knowledge – probably partly to prove my worth, if nothing else. Because we are, of course, bound into the system ourselves.

Perhaps one simple answer is that we need everything: we need to teach, and we need to explore (in case it was not clear, i use ‘explore’ to describe this more socially collaborative model – where we focus on finding meaning).

But what if we cannot do everything – or if people do not want everything?

There is an additional tension to navigate: for global Organisations, the maintenance, validation, and support of learning is a long tail. Codified learning tends to have high up front costs, but it persists over time (usually for way too long). Social Learning, perversely in some ways, also has high up front costs – but not in the creation of assets – instead i think the cost of design itself is high – and so is facilitation, measurement, and validation, although emergent technologies are potentially impacting on this.

Specifically: we need some aspects of moderation (in some, not all, cases). And we need active connection and interconnection – essentially to find meta narratives and ‘truths’ at global levels. So e.g. trends, sentiments, meta-analysis of narratives etc.

There is a risk that we are currently seeing the application of technology to Social Learning from the perspective of formalisation and scale – which are inherently unsocial – so we need to keep a close view on this.

Sharing this reflection as part of my broader work, through last year, and into this one, as i graffiti and rework my core pedagogical theory and work.

15 Dec 07:02

Quality blended learning systems for improving undergraduate students’ skills

by Ifeanyi Benedict Ohanu
Quality blended learning systems for improving undergraduate students’ skills
Ifeanyi Benedict Ohanu, Taiwo Olabanji Shodipe, Chinenye Maria-Goretti Ohanu, Josephine E. Anene-Okeakwa
Quality Assurance in Education, Vol. 30, No. 2, pp.169-183

This study aims to investigate the effects of quality blended learning systems (QBLS) on the improvement of undergraduate students’ skills through the use of the Technology Acceptance Model (TAM) and the Theory of Planned Behaviour (TPB) model.

The study sample includes 1,200 subjects of which 126 and 1,074 are lecturers and students, respectively. The subjects were selected from seven post-secondary institutions in Nigeria. A stratified sampling technique was used in data collection. Collected data were analysed using exploratory and confirmatory factor analyses.

The results reveal that QBLS influences the perceived ease of use, perceived usefulness, attitude, subjective norms and perceived behavioural control towards intentions that culminate in the usage of blended learning tools (BLTs).

The study supports the belief that QBLS enhances users’ behaviour towards system usage, with the TAM and the TPB predicting real usage of BLTs via users’ intentions. Practically, teachers should not only adjust existing BLTs but also create new ones suited to personalized teaching and learning activities.

10 Dec 07:21

Degreed vs EdCast TXP vs Fuse – Battle 2.0

by Craig Weiss

Three systems. One blog post. A repeat, in the sense that a year ago the debut battle between these vendors occurred. It was, and still is, one of the most popular posts, and since the systems have changed, it is only fitting to bring about a second battle comparing each vendor to the others.

Some Ground Rules

  • 100% independent – No pay for play here
  • The data that will be presented, the features, the comparison analysis – is relatively fresh.  As of the end of November 2021.
  • The screenshots are the latest.
  • I have seen demos, in-depth, from each vendor multiple times in 2021. The latest viewing? November 2021.  Each vendor has provided me additional information as they added items – for 2022, or have on their roadmap.
  • One feature I would love to show you is Guidebooks, which I think is the breakout feature for 2021, which appears in Degreed Intelligence.  Degreed requested that the Guidebooks not be shown, nor any screens from Degreed Intelligence, with the exception of Skills Coach.  I have honored that request.
On the other hand, the second big breakout feature, which I see as truly human-centric, will be discussed, but it is hard to show because it taps into the “search” angle. Plus, in 2022, they are taking it another step, which I will note – and which I believe will be a game-changer in the learning system industry. This capability is in the Fuse Platform.

This comparison is system to system to system, based on capabilities, functionalities, system pros/cons (when applicable) – learner and administrator side, metrics too. Client size wasn’t relevant for this analysis, nor who they focus on in terms of verticals/industries. Two of the systems are really TXPs, although one pitches itself as an upskilling platform, the other as either an LXP or TXP. Both have certain functionality that you see in an LMS, which is very common these days due to the blurring of the space between LXP and LMS. They are Degreed and EdCast TXP.

I will add that what an LXP actually is, and how vendors within the LXP market pitch it, are two different things that will not be discussed here – but just an FYI.

The third system is a learning ecosystem. Some folks call them a learning knowledge platform or system.  In the end, it is part LMS, part LXP, part Skills Platform, part engagement, and part human-centric.  This is Fuse.

Each one is a learning system at the end of the day.  Each will be in the top three learning systems for 2022.

Let the Battle Begin

Learner Side

The best way to present the learner side of each system is to show it off.

Fuse

Fuse first and foremost ties everything around communities, which is where a set of learners is placed, depending on what the client wants – for example, by location, job role, skills, interest, or a combination, and so forth. A learner can be in multiple communities.

The communities are essential. Content/Courses are tied into the communities, which you can even segment further down – so x learners in community Y see only these courses/content, while y learners in community Y see something else.  The same applies to comments, knowledge sharing, etc.

This approach is not for everyone, and thus trying to compare it to, say, Degreed or EdCast TXP without fully understanding the approach and how it really works isn’t a fair representation.

Once you understand the methodology, the angle, and why it appears as it does, with higher engagement based on various factors, then it makes a tad more sense. Again, though, it isn’t a fit for everyone, because it doesn’t look like every system.

Fuse presents the number of views (the eyeball), which I really love. It is good to see how many people viewed it. Likes appear – that is common, but where are the dislikes? Comments – the number is shown, and with a click, on the same screen, you see the comments. I’d like to see a rating score right there, next to the views and likes.

You can view content, most recent, recommended, and other options, including the type of content – a nice plus.

Fuse uses a machine-learning algorithm (referred to as A.I.) throughout their system, which yes, EdCast TXP and Degreed do as well. But Fuse doesn’t make you complete the content/course to see what is recommended. Degreed does. This is a strength for Fuse – and EdCast TXP does the same. Think of it this way: recommendations are only as good as what someone is doing, seeing, or viewing – which can be narrowed down. If I have to click “complete” – which is how Degreed’s system works – then honestly, how accurate will the recommendations really be?

Back to Fuse.

The UI is modern, with a lot of options, and can appear different for each client, as the client has a lot of choices for design and experience.  That said, I believe it needs a refresh, which is on their roadmap for 2022.

The Fuse Catalog

The catalog follows a very familiar route using filters. I am a huge fan of filters. In the screen below, the new human-centric element of the Fuse platform is front and center. I typed in “Popular videos on engagement by Steve Dineen” – and output appears. You can sort by popularity and latest, just to name two options.

The date is always visible, duration, number of views (the eye), likes, comments, and share.  I’d love to see those ratings again (search by ratings is on their roadmap), and dislikes too.

I mention the human element aspect because Fuse is the only vendor in the space today where you can type in phrases, similar to what you would do on a search engine, and the content appears. As the screen below shows, a specific inquiry – something someone might type into a search engine – generates the right information.

I wasn’t limited to a tag or keyword, which plenty of systems offer. In a test, I typed in “bad smells from the kitchen” – and content appeared showing what would cause that. I was very impressed with that search capability, especially if I needed to solve a problem in my kitchen (oh, and it taps into going beyond just business-focused or skill-focused content, another plus).

Coming in early 2022, they will launch the ability to search by a question – “What do I need to be a leader?”, for example – and any matching content appears, whether it is free or from a 3rd party publisher (or two) that you, as the client, have purchased. This latter search capability, no one is doing yet, which is why I believe it will be a game-changer – we all want the human element in systems, which is lacking. Human-centric is a must, IMO; systems as a whole feel like I am walking into a museum – look here, look there. Oh, be personable and warm? Nope.

EdCast TXP

This is the home learner page for EdCast TXP. If you have purchased “Career Pathing aka Career Mapping”, then it will appear at the top of your home screen. If not, it will not appear. Ignore the word “Spark” – it is their SMB solution, but the UI/UX for EdCast TXP is the same, hence the screen.

Personally, I am not a fan of “my assignments”. In a system that pushes the whole courses/content-tied-around-skills aspect (which does exist), why would assignments – which means required content – need to be front and center? It is true you can remove that section/area, but it is on by default. The UI/UX though is modern and fresh. Recommendations are based on what you are currently taking, or have completed – that is nice, because in the case of Degreed, recommendations are based on completion only – not taking it, or looking at it.

That isn’t the case with EdCast TXP. The recommendations use the machine learning algorithm (which all vendors refer to as A.I.). I like that it shows whether the content is user-generated (another learner) or from a specific source. A cool option, which does not exist but would be a nice win, is showing the name of the person, beyond “user-generated”. Perhaps I want to follow that person and tie skill coins to that. Then on the metrics side, an administrator could see how many people follow that end-user’s content, what topics, and so forth. Lots of segmentation there, which would help anyone in L&D or Training.

Getting back to what is in the system. I’m not sold on the “Free” statement under each piece of content that is found. I say this because if the client doesn’t purchase any 3rd party off-the-shelf courses/content (all are fee-based), then what is the value of saying “free”?

The only angle for “free” is if you are selling your courses/content and the learner has that appear on their learner home page, but again, that seems odd. It is more likely that they would see the content/course fees in the catalog, then buy it, and then it appears on the learner’s home page.

I do like that the type of content is presented, but would love to see some type of icon of what it represents. If it is a PDF, it says PDF; it doesn’t show the universal icon of a PDF. There is a reason why people in L&D and Training, when writing training guides, work instructions, and so forth, always include an icon that is recognizable and means X. Like a light bulb – means tip or information. A red stop sign? Stop.

They are not the only vendor who doesn’t do this, but to me, it is a missed opportunity. The thumbs-up is nice, but what about the thumbs-down? Again, EdCast TXP is not the only vendor who shows only thumbs-up and not the other option. Likes again – good. Unlikes? You can leave comments, but they appear on another screen, and the share function is slick. There isn’t any way to see the number of people who looked at it, though, or to rate it.

Discover is, well, a way to discover next-generation skills, a variety of academies (which you can follow), and different types of courses/content, which usually are premium (i.e. not free) and state “follows” – even SMEs/influencers, whom you can follow too.  On a separate note, I never understand why some influencers/SMEs do not follow others.  It sort of defeats the purpose of shared learning.

Anyway, the two issues I have with the Discover screen are shown below.

To actually go to the course/content, you have to click the green box (or the image, if it has one). If you click “Follow”, congrats, you are following it – and that’s it.

When you click the box, though, you may see “Pathways”, “Smart Cards”, and/or “Courses”, or only “Courses”.  You do have a variety of options here – again, a plus.  When you click the courses, you will see either “free” or “paid” – which means it costs money. “Paid” is past tense, so I think showing the actual cost would make more sense.  I have the GO1 license, so any publisher in the GO1 library, which is massive, is listed as “free”.  The same applies if you have purchased Udemy or LinkedIn.

Again, purchasing content/courses is common with 3rd-party publishers, and most nowadays – including GO1, smaller publishers, and bigger ones such as Open Sesame – have an “all you can consume” model, which the majority of clients purchase.

Back to EdCast TXP

I love the Smart Cards, which can be content from someone you follow, something out of one of those libraries you purchased, or again “free”.  Oh, and yes, you can see smart cards you have to pay for.  That said, likes, shares, and comments are doable, and the type, including “interactive”, is listed. A nice plus.  Many systems, like LinkedIn Learning, list videos as courses – which are really just videos – and that is a peeve of mine.  That isn’t the case with EdCast TXP.  A video is a video. A course is a course, which may or may not have a video in it.

Skill coins are sort of virtual credits, which you can use to purchase 3rd-party content yourself, purchase other items, or use for whatever.  It’s a nice feature, but a reward store is a must for “credits”, and EdCast TXP lacks that.

Degreed

Learner home page

Totally not a fan of assignments – it sort of gets back to low usage, if all you are doing is assigned work. Anyway, the resources area provides a quick way to get to content/information. Monthly Activity is nice, but again, completions are pushed hard here, yet the whole LXP angle has always been learner-centric and informal.  Guess those days are gone, when learner-centric means required and informal means, uh, formal with a twist.  Degreed isn’t the only vendor who went this route in the LXP space – they all did, i.e. all LXPs.

I like “Connect with a Mentor”, but I wish I could actually see the person and talk video-to-video, which you can’t.  Nevertheless, I found it interesting that they show the number of skills each mentor has, although I wish I could hover over and see what those skills were, or how they rate on “focus skills”.  The analytics, btw, do not have any mentor data on how good or awful these folks are. I think it would be a plus to know – allow your learners to rate them, and then show the rating below the skills.  I mean, sure, they may have the skills, but they have no personality and treat you like a sub-human.  Five out of five!

The Catalog 

  • You can search for pathways, courses, articles, podcasts, and other content
  • The catalog uses filters – the options include Type, Provider, Duration
  • If you have purchased the add-on (called, I believe, Career Mapping), it is shown as opportunities, which appear next to the keyword you are searching for.  For each opportunity, you will see the number of skills required and how many you already have.  To learn more about an opportunity, you just click it.
  • To see who the 3rd party publishers are, they are visible as icon badges – with their logo.   This appears on the catalog page, along with pathways.

Here is a screenshot of content you have saved for future viewing/access.  I like that the type is shown, but similar to many others in the industry, it lacks a matching icon.  Duration is always a misnomer, but every vendor does it: what takes you 15 minutes may take me two hours. Plus, the whole purpose is to focus on what you want, when you want to know it.  If the content is from a 3rd-party publisher, you will see their name next to the duration.  The top of the screen shows “Assigned” (content assigned to you), shared, saved, pathways, and plans.

Here is the plans screen

And finally, the pathways screen – think sort of a learning path if you will, with content under the said pathway.

How do Fuse, EdCast TXP, and Degreed stack up on learning environment functionality?

Based on my Learning Systems Template of 24 features within the learning environment

Degreed 20/24, with two big misses (i.e. features they do not have), plus a half miss.

  • A customizable home page that is a different look/appearance/theme based on the learner who logged into the system
  • Search by ratings
  • Repeatable creation of events based on a standard course template (e.g. number of days, min/max seats, title, description) – this is the half miss because Degreed does not have any event management features/functionality. Nor do they have classroom management or vILT functionality either.

EdCast TXP 21/24; however, the three missing items are all on their roadmap for 2022.

Two stood out as missing:

  • Search by ratings
  • Learner progress bar or similar

Fuse 23/24, with the one item missing – on the roadmap for 2022

  • Search by ratings

Who scored high in other key areas, and who didn’t?

Compliance

  1. Fuse, 12/13 features, with digital signature on their mobile app coming in 2022 (this is the only one missing)
  2. EdCast TXP, 9/13, with the remaining items on the roadmap. This includes a digital signature on their mobile app, coming in 2022, and workflow features.
  3. Degreed 5/13, items that are missing include diagnostic tools to identify compliance and competence gaps and recommend or assign appropriate learning, workflow features, digital signature in the system, and yes on mobile.

I will add that the mobile digital signature is very new to the industry, so it isn’t a surprise that all of them are missing it.

Machine learning (aka AI in the industry), Playlists, and Content Curation –  31 features via my learning systems template

Let’s take a look at an apples-to-apples comparison for machine learning.

| Functionality | Fuse | Degreed | EdCast TXP |
| --- | --- | --- | --- |
| The system uses an algorithm | Yes | Yes | Yes |
| AI in the system can scan documents, courses, content, audio and video files and produce text results in a transcript or similar | Yes | No | Yes |
| Learners are not penalized/weighted for not completing a course, tied to the algorithm | Yes | Yes | Yes |
| Recommends courses/content based on job role, skill, and/or additional variables | Yes | Yes | Yes |
| Can create a learning path based upon recommendations, using the algorithm over a period of time | Yes | Yes | Yes |
| Recommends courses based on previous courses in progress/completed | Yes | Yes | Yes |
| Recommends content/documents/videos/etc. based on in-progress/completed courses or content | Yes | Yes | Yes |
| Ability to include other items to enhance recommendation of courses/content | Yes | Yes | Yes |
| The administrator can change weights, points, and other items to assist the deep learning process (more accurate info) | Yes | No | Yes |
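If you keep a checklist like this in a spreadsheet, tallying the Yes/No answers into vendor scores is easy to script. Here is a minimal Python sketch; the abbreviated feature names and the one-point-per-“Yes” rule are my own illustration, not the actual Learning Systems Template:

```python
# Hypothetical sketch: tallying a vendor feature checklist like the one above.
# Feature names are abbreviated stand-ins; the scoring rule (one point per
# "Yes") is my assumption for illustration.

CHECKLIST = {
    "Uses an algorithm":             {"Fuse": "Yes", "Degreed": "Yes", "EdCast TXP": "Yes"},
    "Scans files into transcripts":  {"Fuse": "Yes", "Degreed": "No",  "EdCast TXP": "Yes"},
    "No penalty for non-completion": {"Fuse": "Yes", "Degreed": "Yes", "EdCast TXP": "Yes"},
    "Admin can tune weights/points": {"Fuse": "Yes", "Degreed": "No",  "EdCast TXP": "Yes"},
}

def tally(checklist):
    """Return {vendor: points}, counting one point per 'Yes' answer."""
    scores = {}
    for answers in checklist.values():
        for vendor, answer in answers.items():
            scores[vendor] = scores.get(vendor, 0) + (answer == "Yes")
    return scores

print(tally(CHECKLIST))  # {'Fuse': 4, 'Degreed': 2, 'EdCast TXP': 4}
```

Run against a full feature list, the same tally produces the kind of totals used in the rankings throughout this review.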

For Playlists and Content Curation, the total score is 21

  1. EdCast TXP 21/21 and Degreed 21/21
  2. Fuse 20/21

Digital coaching is getting hot. Let’s take a look at how Fuse vs EdCast TXP vs Degreed compare.

Total possible Score – 13

  1. Fuse 10/13 and EdCast TXP 10/13
  2. Degreed 4/13 – Definitely a weakness

Notifications

These are a must with any learning system. Very few systems offer SMS notifications, yet two in the group below do.

| Functionality | Fuse | Degreed | EdCast TXP |
| --- | --- | --- | --- |
| Upon registration of courses and events (webinars/seminars, etc.) | Yes | Yes | Yes |
| Automatic and customizable email notifications | Yes | Yes | Yes |
| Scheduled notifications (hourly, daily, weekly, monthly) | Yes | Yes | Yes |
| Auto notification when a CEU/CPD needs to be completed, before expiration | Yes | No | Yes |
| Email notifications when a required course/content is about to expire | Yes | Yes | On Roadmap |
| Auto reminders via email | Yes | Yes | On Roadmap |
| Future tech – ability to send SMS (text messaging) | Yes | No | Yes |
| Email templates | Yes | Yes | Yes |

Mobile First – You hear it all the time, wait, I do…   Possible Score – 14

  1. Fuse 13/14 – the only item missing is admin functionality in the mobile app (it is rare to have it, so no surprise here).  They have a mobile app for iOS and Google Play, and have on/off synch – a definite must!
  2. Degreed 11/14 – Admin and instructor capabilities are missing; no surprise on either, since admin is rare and Degreed doesn’t offer any instructor functionality to begin with, let alone vILT or classroom management.  The other miss is rare to begin with, so no problem there.
  3. EdCast TXP – 9/14 – All the missing items are on the roadmap, biggest surprise – lack of on/off synch.  Bummer.

Skills Capabilities – This is based on my entire skills tab functionality

  1. EdCast TXP – 77%
  2. Degreed – 72%
  3. Fuse – 66%

Generally speaking, all good scores, because skills capabilities/functionality in the entire learning system industry are still at the infant stage.  Fuse is going full throttle with skills capabilities in 2022; if they hit their targets, they will bounce into the 80s.  For Degreed, the big “on the roadmap” area is Skills Development, with five items.  For EdCast TXP, it is digital coaching and skills validation that are on the roadmap.

The majority of the learning system space has skills ratings. How do these three stack up against one another?  I should note that Degreed was the very first vendor to have skill ratings with the explanation and 1-5 angle.

| Functionality | Degreed | EdCast TXP | Fuse |
| --- | --- | --- | --- |
| The learner can self-validate on a skill or skills | Yes | Yes | Yes |
| A manager can provide a skill-rating validation of an employee (who has completed their own skill validation rating) | Yes | Yes | Yes |
| The system identifies what each skill rating represents (1 to 5 scale) | Yes | Yes | On Roadmap |
| The administrator can edit/delete what each skill rating represents | No | Yes | On Roadmap |
| The system provides proficiency details for each skill rating (1-5), so a rating of 1 means proficient at only that level, and so forth | Yes | Yes | On Roadmap |
| The administrator can edit/delete/modify proficiency descriptions | No | Yes | Yes |
| An analytics section specifically around skill rating validations, including a comparison total score for each learner, a manager comparison for each learner, skill proficiencies, skill strengths, and areas to improve with score ratings | Yes | Yes | Yes |
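The analytics described in the last row – comparing a learner’s self-validation against the manager’s rating on the 1-5 scale – boils down to simple per-skill arithmetic. A quick sketch, with made-up skills and a made-up proficiency threshold:

```python
# Illustrative sketch of self-vs-manager skill-rating comparison (1-5 scale).
# Skill names and the "below 3 needs improvement" rule are invented for the
# example, not taken from any of these three systems.

self_rating    = {"Python": 4, "SQL": 2, "Communication": 5}
manager_rating = {"Python": 3, "SQL": 2, "Communication": 3}

def skill_gaps(self_r, mgr_r):
    """Per-skill (self - manager) delta; positive means self-rated higher."""
    return {skill: self_r[skill] - mgr_r[skill] for skill in self_r}

def areas_to_improve(mgr_r, threshold=3):
    """Skills the manager rated below the proficiency threshold."""
    return sorted(s for s, r in mgr_r.items() if r < threshold)

print(skill_gaps(self_rating, manager_rating))  # {'Python': 1, 'SQL': 0, 'Communication': 2}
print(areas_to_improve(manager_rating))         # ['SQL']
```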

Overall, very impressive.

Administration Side

A learning system, regardless of the type, is only as good as its administration side, yet so many folks focus on the learning side first. Understandable, but uh, who is in the system the most?

Again, this is based on my Learning Systems Template, available for download (FREE)

Total possible score – 32, which includes the functionality side and skills capabilities on the administration side. You will see a split: the first half is based on the functionality tab of my Learning Systems Template; the second half is based on the skills tab of my LST.

  1. Fuse 31/32 – 97%  (the feature they are missing is on their roadmap for 2022 – and is rare in the industry to have it, in 2021 anyway).
  2. EdCast TXP 27/32 – 84%
  3. Degreed – 27/32 – 84%

If I remove drag/drop on the admin side for UX – which I believe systems should offer, and some do, though it is not universal – Degreed would move up to 28/31, and EdCast the same, 28/31.  Fuse has drag and drop.

LXP/TXP vendors will often note that they are not an LMS, which is okay, that’s fine, but when you start offering manager and instructor features and go above 90% on the functionality admin side, then…

EdCast TXP is working on manager-specific functionality (again, view my template), Degreed doesn’t have it, and Fuse, of course, does – then again, they are not only an LXP here.

In the early days of the LXP space, nearly every LXP was missing one feature, the one below:

Allow administrators to set multiple levels of approval (e.g., no approval needed, supervisor approval, instructor approval, etc.)

And, guess what?  Degreed is missing it, and EdCast TXP has it on their “roadmap” – so, uh, right now, they are missing it too.

For those playing along, and wondering – Fuse, yes, they have it.

Metrics

Again, Degreed asked me to withhold their metrics, but I have written before about Degreed Intelligence (an add-on cost).  The articles/books/viewing metrics are still bar graphs/pie charts (they appear in some of the Degreed Intelligence screens too).  On the other side, though, the top content viewed (by provider) and top 10 topics are winners, and I wish every system had them.  Plus, that side has better data visualization.  Degreed Intelligence includes Skills Coach, which makes no sense to me, because it isn’t really metrics in the way the other two modules are geared.  That said, here are a couple of screens showing it off: what it presents is nice, but does it really need to be part of a paid add-on?

And the analytics it presents – which I like.  Big fan of spider webs, unless they are on my chair; then no, just no.

Fuse

Best metrics in the industry, period. Oh, and they are included at no additional charge. Let’s check out some screens:

Wait, there’s more

There is even more, but I only wanted two screens.  Trust me, it is the best in the industry; I cannot stress that enough.  Better yet – as if it could get better – they are going to another level in 2022, with even more advanced analytics that tie skills and content together.

EdCast TXP

They now include EdGraph as part of their system (it is an additional-cost add-on, unfortunately). However, it has a lot of pop, with very specific metrics, ideal for segmentation of data.

Another new feature on the data side of the house is Skills Studio – pretty cool once you learn how to use it; a small learning curve is in order here. This is where an admin can do things themselves, without having to get someone from IT involved (yuck). What’s the old joke – wanna have your system ruined? Call IT.  Snark. Snark.

The battle has been fought, and the results are in.

Depending on your perspective, you may well have your own take on which one won it all.

For me, there is a clear winner. Maybe you see it as well.

Let’s find out together.

Actually, for 2022, tied at #2 are Degreed and EdCast TXP.

The number one learning system for 2022 is Fuse.

E-Learning 24/7 

10 Dec 07:21

Microsoft uncovers Windows 11 – these are the new features

by Ella Murphy

New user interface

The most obvious change is the new taskbar, which now displays pinned program icons centered at the bottom of the screen by default, just like on the Mac. But that’s it for the similarities. The new start button opens an entirely new floating start menu, which is also positioned centrally. The live tiles introduced with Windows 8 are history.


New functions

For years, one of Windows’ strengths has been its flexible window management. However, the possibility of pinning apps and windows to the edges or corners of the screen with the mouse or Windows and arrow keys is overlooked by many. This is probably why Microsoft is now making the practical window management more obvious.

Arrange windows quickly

Windows can now be pinned or quickly arranged at the edge of the screen with a click on the maximise button.

As soon as the mouse is placed over the maximise button of windows, a selection of various positioning options appears. For example, three windows can be quickly placed next to each other on a large widescreen monitor and Windows 11 will remember this window grouping.

Windows 11 will have optimised touch operation with a new virtual keyboard, an improved dark mode, new sound effects and a new start-up sound. Virtual desktops can be managed more easily, and Skype is apparently being dropped. Instead, Microsoft Teams and a new Xbox app will probably be integrated into Windows. With the Xbox app, Microsoft probably wants to promote its Xbox Game Pass, a subscription service for PC and console games.

Important new features for gamers that may also be shown today are Auto HDR and DirectStorage. Enabling Auto HDR adds High Dynamic Range (HDR) to newer PC games, provided the gamer has a compatible HDR monitor. DirectStorage is said to massively speed up loading times.

Microsoft will hold a second Windows event for software developers after the presentation for media and the general public. At this event, Microsoft could explain its plans for the new Windows Store (officially Microsoft Store). The tech blog Windows Central reported in April that Microsoft was planning to release a revamped app store this year that would not only be clearer and faster, but above all more open to all kinds of apps and games. The new Windows Store is supposed to pave the way for developers to bring any Windows application into the store, including browsers like Chrome or Firefox.

Microsoft even plans to allow app providers their own sales platforms or payment solutions. The latter would mean that app developers could avoid Microsoft’s store fees in the future. Software providers such as Adobe have so far had no reason to offer their subscriptions for Photoshop and Co. via the Microsoft Store, but that could change in the future. Microsoft CEO Satya Nadella recently promised app developers “the most innovative and open platform to develop and monetise apps”.


New taskbar and new start menu

On the upper half of the start menu are the pinned apps, which are arranged in rows similar to a home screen on a smartphone. A click on “all apps” opens the alphabetical list of all installed apps. Frequently used apps can be pinned in the start menu via this app list. If there is not enough space for all pinned apps, the other apps appear when you scroll down in the start menu.
In the lower half of the start menu, Windows 11 always shows the most recently opened apps and documents. Finally, at the very bottom, on the right of the start menu, is the button for shutting down or restarting.

Was everything better in the past? If you don’t like the centred arrangement, there is help: the start button and the apps in the taskbar can alternatively be displayed on the left. This takes just a click in the settings, but the live tiles do not return.

The fundamentally revised start menu is probably one of the reasons why Microsoft is making the leap to version 11.

Picture from https://www.windowscentral.com/windows-11-new-start-and-taskbar

New icons in Explorer

System icons and important folders such as documents, downloads, music, pictures and videos get a refreshed and colourful look. The file explorer itself looks unchanged in the preview version, but it is quite possible that it will be polished up by the time the final Windows 11 version is released.


Widgets for weather, news, sports results

With Windows 11, the widget window that was just introduced with Windows 10 gets a new look, but apparently it can’t (yet) do more than before. It shows the weather forecast, the latest news from various media or sports and stock market reports. The topics can be personalised and media you don’t like can be hidden.


Smoother window animations

Already visible in the preview version are new and smoother animations when moving, maximising or closing programmes and windows, which brings us to the new functions in Windows 11.

Rounded corners

Those who like to work with a digital pen will be pleased with the revamped Ink workspace on the far right of the taskbar. All apps that are frequently used with the pen in everyday life can now be pinned there, be it note apps or graphics programmes.

As mentioned, not all design changes are yet included in the leaked preview version of Windows 11. It is likely that the system settings and other areas will also receive a makeover.

And what else?

Windows 10 has so far only provided limited support for the power-saving ARM chips that enable longer battery life. Now the ARM emulation layer is finally being expanded to include 64-bit support. In concrete terms, this means that many more apps can be ported to Windows devices with power-saving ARM chips in the future.
However, the new system requirements also mean that some four-year-old PCs and laptops are no longer supported, which is not pleasant. Sure, the seventh generation of Intel processors dates from 2016, but unlike in the smartphone market, development here is much slower and more staggered: processors are still being built and installed in devices for years.


How to check if your laptop can run Windows 11

Follow these steps to test whether your PC meets the system requirements for Windows 11.

  1. Download Microsoft’s PC Health Check here. Open the file, agree to the terms of service and click Install. Make sure the box is checked that says Open PC Health Check, and click Finish.
  2. The app home page that pops up says “PC health at a glance.” At the top, a box reads “Introducing Windows 11.” Click Check now.

Disclaimer

All pictures in this article are copyrighted by Microsoft. For more information about Windows 11, I highly recommend the Windows Insider link HERE

About the Author:

Drago is a Microsoft MVP for Office Apps & Services and a professional for Microsoft Exchange, PowerShell and cloud services. He works as a principal system engineer and cloud solution architect at a leading Swiss IT company and CSP. He is also a trainer for Microsoft cloud services and Web 2.0 in Swiss schools.

This blog post is part of Microsoft Teams Week. Find more great blogs here.

Reference:

Petrovic, D. (2021). Microsoft uncovers Windows 11 – these are the new features. Available at: https://www.msb365.blog/?p=4545 [Accessed: 1 December 2021].

The post Microsoft uncovers Windows 11 – these are the new features appeared first on European SharePoint, Office 365 & Azure Conference, 2022.

10 Dec 07:20

Microsoft Teams – Public VS Private

by Ella Murphy

This article is part of a series I plan across the various Microsoft Teams capabilities. One of the considerations before deciding to roll out Microsoft Teams is to understand its various capabilities and their impact, in light of the governance requirements within your organization.

Private and public teams are one such area. Understanding the capabilities of both will prepare you for the decisions that need to be taken if you plan to roll out your teams as either private or public. What I am trying to do in this article is highlight a consolidated set of features to keep in mind. Of course, there are various articles on the same topic across the internet, but I have tried to add certain additional observations based on our hands-on exposure in our organization and the related implementation.

The fundamental differences between public and private teams are:

  • Public Teams are available for users to be searched and displayed to allow users to join when required
  • Private Teams are controlled by a Team Owner who provides controlled access
    • While Creating the team OR
    • Based on Request Access
| Public Team | Private Team | Remarks |
| --- | --- | --- |
| Not the default | The default | By default, when a user creates a team from the UI, the type is always set to Private |
| Available to be searched from MS Teams, Outlook, etc. | Available to be searched from MS Teams and Outlook (as per Microsoft documentation) | Recent change for private teams – earlier they were searchable only from Outlook (as of March 23, 2018, this capability is documented as available from the Teams interface as well). Note that I was not able to replicate this; in my hands-on experience, private teams were searchable only within Outlook Online, not from the Teams interface |
| Users can join proactively after a search | Users are required to request access; a request-access email is sent to the team owner, who needs to add the requested user | Note that I was not able to request access from the Teams interface, since private teams were not yet discoverable through search there. However, as a user, I was able to send an email to the private team using Outlook with a request-access message |
| Users are automatically added when they join | The team owner adds a user based on the request; users are notified/alerted of the change via Teams | Requested users are notified within the Teams interface as an “Activity” alert |
| I observed that if users join a public team, there is no notification as such (from mail or the Teams interface) – the team owner is not aware anyone new has joined until he/she checks the team members list | Admins are not notified of a user’s request to join a team via the Teams interface; I experienced that this is achieved only via email | |
| As mentioned, public teams are available to be searched from Teams and Outlook | Preventing the search of private teams in Outlook can only be handled via PowerShell, with an Exchange Online change for the underlying group: `Set-UnifiedGroup -Identity "<<Team Name>>" -HiddenFromAddressListsEnabled $true` | I have not tried this with a public team, but I assume it will also be applicable |
| Users can invite other users to join a public team | Only the owner can invite other users to a private team | Applicable even for inviting external users (if enabled within the Teams administration settings). Note: external users cannot invite other external users to public or private teams |
| All users in the team have access to the related workspaces to view, edit and add files (OneNote, SharePoint, etc.) | Accessible if the owner approves the request | For external users in private/public teams, the owners can decide what kind of access to provide within SharePoint. There are similar controls across all other workloads; how external users can be controlled is a topic of discussion in itself |
| Users can share files with other users within/outside the team | Users can share files with other users within/outside the team | External users cannot share files outside |

The whole concept of external users and their impact on Teams and the various workloads within a team is a topic that needs to be discussed separately. I hope the above comparison provides an overview of almost all the considerations to keep in mind for private vs. public teams.

This blog post is part of Microsoft Teams Week. Find more great blogs here.

About the Author:

As a Solution Architect, my responsibility involves providing consulting and service delivery for my Organization. I support the full business and IT solution lifecycle – strategic planning and design, to implementation and governance. My contributions encompass a broad portfolio of strategies and solutions that leverage Enterprise Implementations utilizing various Microsoft Technologies.

Reference:

Ramakrishnan, P. (2021). Microsoft Teams – Public VS Private. Available at: https://www.c-sharpcorner.com/article/microsoft-teams-public-vs-private/ [Accessed: 1st December 2021].

The post Microsoft Teams – Public VS Private appeared first on European SharePoint, Office 365 & Azure Conference, 2022.

06 Dec 10:03

Supporting hybrid work: Optimizing your environment for great Microsoft Teams meetings

by Martin Rinas

Technical best practices for Teams admins to ensure healthy meetings

The return of employees to the office requires preparation and planning to address capacity and quality, and to adjust to new patterns of use after more than a year and a half of mostly remote work. Ensuring a great experience is central to our #PeopleFirst strategy for service management.


At Microsoft, we’re listening to employees and customers about their needs, and we’ve gathered some simple best practices to guide you, as Teams administrators, on how to ensure a smooth and successful return-to-office experience. In this blog, we share how existing tools can be leveraged to establish technical readiness by focusing on direct service connectivity, sufficient network capacity, and healthy devices.


Adjusting to using Teams in hybrid environments

The use of online meetings has grown significantly since the pandemic started in early 2020, and one-on-one meetings have become more common than direct calls between two participants. Even as employees transition into hybrid work environments, many expect that usage of online meetings will remain high.


Our customers expect that elements of the hybrid workplace will remain in the mid to long term, as not everyone may go back to the office full time. This will result in higher network capacity demand compared to the pre-pandemic times. The user experience with a cloud service such as Teams depends on the user’s ability to communicate successfully with the cloud. To assure ideal end user experiences, we’ve seen customers quickly make network changes—such as implementing VPN split-tunnel setups—after employees had to continue working from home on short notice.


Now that employees are coming back to offices, our customers want to ensure that high-quality experiences in Teams meetings will continue. It’s vital that IT professionals help our end users achieve this, even as the complexity of devices, identities, locations, and scenarios continue to grow.


Steps to improve Teams experiences

We make every effort to streamline and simplify the delivery of healthy, compliant, and secure cloud services through Microsoft 365. To achieve that, we’ve compiled three steps that enterprises should follow to improve Teams meeting experiences, no matter what devices participants are using or where they are located:

 

1. Use direct service connectivity

For the best user experience in Teams meetings, it is important to open direct connectivity to the service endpoints, specifically those in the optimize category. These are three Teams-dedicated IP address ranges with four UDP ports. See ID11 on our worldwide endpoints document for reference.
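If you want to script a quick check of your firewall/proxy rules, destinations can be tested against the optimize-category ranges. A minimal Python sketch; the IP ranges and UDP ports below are the ID 11 values as published at the time of writing, so always verify them against the current worldwide endpoints document:

```python
# Sketch: check whether an IP/port pair falls in the Teams "optimize" category.
# Ranges and ports are the ID 11 values as published at the time of writing;
# verify against Microsoft's current endpoints document before relying on them.
import ipaddress

OPTIMIZE_RANGES = [
    ipaddress.ip_network("13.107.64.0/18"),
    ipaddress.ip_network("52.112.0.0/14"),
    ipaddress.ip_network("52.122.0.0/15"),
]
OPTIMIZE_UDP_PORTS = range(3478, 3482)  # UDP 3478-3481

def is_optimize_endpoint(ip, port):
    """True if this destination should get direct UDP (proxy bypass)."""
    addr = ipaddress.ip_address(ip)
    return port in OPTIMIZE_UDP_PORTS and any(addr in net for net in OPTIMIZE_RANGES)

print(is_optimize_endpoint("52.112.1.10", 3478))  # True
print(is_optimize_endpoint("8.8.8.8", 3478))      # False
```

A check like this only validates your rule set; actual reachability still needs the Teams Network Assessment Tool or CQD data.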


As with the VPN split-tunnel implementation, customers are advised to implement proxy bypass and open UDP connectivity to these service-specific endpoints for improved Teams experiences on corporate networks. Most customers already have implemented this, but we recommend validating that these endpoints are still reachable from your offices. The Transport tab in our CQD Power BI reports can help identify networks where direct UDP connections cannot be established and identify areas that do not allow direct service connectivity. In addition, running the Teams Network Assessment Tool allows you to confirm connectivity. If you already have uploaded building information to the Call Quality Dashboard (CQD), it will be even easier as you’ll be able to identify the sites directly.


If you have been using Teams live events, consider the implementation of an eCDN solution to optimize the delivery of video streams to in-office viewers without affecting network performance. Please be aware that the view-only experience currently doesn’t leverage eCDN.


2. Analyze network capacity and usage
The hybrid workplace is increasing bandwidth demand from your end users. While Teams adapts to the available bandwidth and can deal with low-bandwidth situations, ensuring that sufficient bandwidth is available creates better experiences for meeting participants. We encourage customers to monitor load on proxies and exhaustion of network address translation (NAT) pools if these are close to capacity.


Additionally, there are related issues to check as employees return to the office. Is there a slow ramp up of employees returning to your company facilities or will you allow all employees to return at once? What is the expected percentage of users going back to the office? Has your company reduced office space, or moved to another location? Is there a need to assess network readiness ahead of opening to meet the needs of your desired plan?


Customers can analyze current usage patterns using the CQD, the Quality of Experience Report set, and the dedicated Back to Office report to identify the busiest hours and days of online traffic. These reports also provide filtering capabilities on ASN and public IP address ranges to sort by specific regions, countries, or sites. We have published updated bandwidth estimates for typical scenarios and advise customers to closely monitor the capacity of relevant network links. Managing end user experiences, including establishing a Quality Champion role and holding regular quality reviews, allows you to identify trends before they become a problem.
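To illustrate sorting telemetry by site, a simple subnet lookup with Python's standard ipaddress module can tag exported records by office IP range (the site names and subnets below are hypothetical examples, not real ranges):

```python
import ipaddress

# Hypothetical mapping of office sites to their public IP ranges
SITE_SUBNETS = {
    "London-HQ": ipaddress.ip_network("198.51.100.0/24"),
    "Chicago": ipaddress.ip_network("203.0.113.0/25"),
}

def site_for_ip(ip: str) -> str:
    """Return the office site whose subnet contains the given client IP,
    or 'unknown' if it matches none of the configured ranges."""
    addr = ipaddress.ip_address(ip)
    for site, net in SITE_SUBNETS.items():
        if addr in net:
            return site
    return "unknown"

# Example: tag exported call records by site before aggregating quality stats
print(site_for_ip("198.51.100.42"))
```

The same lookup can then drive per-site aggregation of call-quality records before they are charted.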


If you cannot use any existing network monitoring data, you may want to use the updated network planner in the Teams Admin Center to estimate bandwidth demand. As with any simulation model, this should be treated as a starting point and complemented with network capacity monitoring to refine the estimated bandwidth consumption as more users return to the office.
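The kind of first-pass estimate such a planner produces boils down to simple arithmetic: headcount, expected concurrency, and a per-stream bit rate. A minimal sketch, with illustrative numbers rather than Microsoft's published figures:

```python
def estimate_site_bandwidth_kbps(users: int,
                                 concurrency: float,
                                 kbps_per_stream: int) -> int:
    """Peak bandwidth estimate for one site: the number of users in
    meetings at once times the per-stream bit rate. All inputs are
    planning assumptions, not measurements."""
    concurrent_users = round(users * concurrency)
    return concurrent_users * kbps_per_stream

# Example: 400 returning users, 25% in meetings at peak,
# ~1500 kbps per video stream (illustrative figure only)
peak = estimate_site_bandwidth_kbps(400, 0.25, 1500)
print(f"Estimated peak demand: {peak} kbps (~{peak / 1000:.1f} Mbps)")
```

Treat the result as a rough starting point and refine it against measured CQD data as office occupancy grows.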


In addition, some customers consider limiting the media bit rate for users in heavily constrained sites to control the end user experience. In the near future, you will be able to use roaming bandwidth control to apply dynamic bandwidth policies based on users’ locations, which will help with heavily constrained sites.


3. Make sure user devices are optimized
Both personal and conference room devices like Teams Rooms play a crucial role in delivering optimum meeting experiences, and we expect that to remain true in hybrid work environments.


Making sure all devices are optimized is crucial. Customers can leverage the device management capabilities in the Teams Admin Center to quickly identify Teams Rooms devices reporting unhealthy statuses and start remediation activities before a site opens. If you are not using Teams Rooms devices in all sites, you may want to work with your Cloud Video Interop provider to ensure all existing rooms are up and running.


Learn more about improving Teams experiences

The three steps outlined above—ensuring direct and unhindered access to the Teams service, validating network capacity, and confirming rooms and devices are healthy—will improve the Teams experience for employees as they return to the office. Follow them as recommended and look for our next blog for Teams admins, or explore the Teams site for admins and IT professionals for more resources. As always, you can learn more about managing Microsoft Teams at https://aka.ms/SuccessWithTeams on Microsoft Docs. Utilize our sections on managing the service and our network configuration learning path on Microsoft Learn. Bring your questions to our Teams Community to get them answered by our experts and learn from other customers.

 

 

01 Dec 14:12

Online learning tips for neurodivergent students

by Charlie Fletcher
Online learning tips for neurodivergent students

Online learning has become a prevalent part of the educational landscape. This has some significant benefits. It enables learning to continue when distancing is a necessity. It also provides greater access options to students living in rural areas or experiencing mobility challenges. However, for students living with neurodivergent traits, e-learning can be problematic.

Neurodivergence covers a range of students whose cognitive or neurological functioning operates differently from the majority. This tends to include autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD). Learning difficulties like dyslexia form part of this definition, too. It can also be used to describe those navigating mental health challenges.


Read more: Thinking differently about dyslexia in the classroom: Part 1


As an educator, you can help students and their parents achieve the most positive e-learning experience. We’re going to run down a few elements to consider.

Provide structure

One of the common challenges surrounding students with neurodivergent traits can be a lack of structure. Certainly, each child living with such challenges will have their own experiences and needs. Some students navigating autism can find it difficult to be comfortable with an unpredictable learning environment. A novel educational approach can also disrupt concentration for those with ADHD, learning difficulties, or anxiety. As remote learning can represent a looser way of learning, it’s important to implement a sense of structure.


Read more: Answering the Why? How? What? of lesson planning


Work with parents to create practical schedules for classes. When students are learning from home, they’re not always going to be connected to a teacher. As such, you need to arrange reliable, repeatable times at which the entire class gets together to have discussions, share ideas, and learn together.

This isn’t just about having the same times for the same activities each day. Variation is important, particularly balancing on-screen and off-screen activities. It’s more about giving students some confidence that there is an underlying structure to their school day.

Beyond the schedule, it’s also important to understand how you can adapt the learning structure to support greater focus. Often this comes down to talking through solid techniques that can get everyone involved into the right mindset to learn any time of the year. This can include focusing praise on how well students respond to the process rather than leaning on personal traits. It can also involve integrating regular time for breaks and naps into the schedule, even for older students.

Planning strategies to maintain focus can create the kind of learning structure best suited to keeping neurodivergent students engaged.

Consider surroundings

Environmental factors can play a significant role in education. Neurodivergent children may find the elements most disruptive to their wellbeing revolve around the presence of external stimuli. While a school may have a certain amount of control and resources to create a suitable environment, e-learning from home doesn’t always match up. As such, it’s important to work with parents to ensure the students’ surroundings are well-designed to suit their educational needs.

In many cases, this involves allocating a specific learning environment in the home. Not everyone has an entire spare room to dedicate to this. But even a corner of the living room, dining room, or bedroom can be helpful here. This creates a space neurodivergent students can connect to as their class. It also creates a distinct mental separation between school space and home space. You can recommend decorating the area with neutral-colored paint and adding soft lighting. For students who experience disruptive stimming behavior — as with autism or ADHD — it can be wise to remove elements provoking this tendency from the room.


Read more: Teaching students how to manage digital distractions


Alongside in-home teaching areas, many parents are keen to have their children take advantage of outdoor spaces. This can be a great break from the homeschool room, particularly if the student is feeling overwhelmed.

Remember, though, outdoor classrooms can present challenges alongside the inspiration and space the natural environment offers. Even in a yard, a variety of stimuli can distract neurodivergent students: wildlife can pull focus, and weather can create discomfort.

Talk to parents about how to mitigate the disruption, such as adding outdoor furniture to make the space feel more like a classroom. Discuss measures to recognize when the outdoors can function as a break to release some energy. You should also review how to transition back to indoor learning effectively when necessary.


Read more: Can outdoor learning help hone your students’ learning?


Encourage self-care

Neurodivergent students cannot have a successful online education experience if they’re suffering. A recent study found students learning from home are experiencing higher levels of stress than their in-class counterparts. This is before taking into account the challenges of neurodivergence. As such, it’s important to consider the role encouraging self-care in these learners can play. With some additional focus on this, you can successfully support their remote education.

Your priority here is to maintain communication. Teachers need to keep an open conversation with parents about students’ mental and physical wellbeing at home. Talk about the impact this has had on classroom progress.

Most importantly, the student needs to be an active part of these conversations. Some neurodivergent students may not have typical communication skills. But many have excellent insights into what they find challenging, what their emotional state is, and what they need.

It can also be wise to empower students to use the school’s learning management system (LMS) to access emotional support from counselors.

Alongside communication, socialization can be important. Neurodivergent students often face hurdles related to social anxiety and maintaining relationships. Unfortunately, remote learning can be quite an isolating experience. It also doesn’t give these students opportunities to practice coping methods and forge friendships with peers.

You must work with parents to make arrangements for students to study together. Find time and space for them to socialize occasionally, whether online or in person.


Read more: 4 Steps towards digital wellness for students


Conclusion

Online learning can be beneficial to give students greater access to learning. But it can be challenging for those with neurodivergent traits. Teachers need to work alongside parents to build a supportive structure. You also need to ensure an appropriate learning environment. Importantly, emphasize self-care to help students be mentally and emotionally prepared for their education. It’s never going to be easy, but all students deserve the chance for a fulfilling educational experience.

The post Online learning tips for neurodivergent students appeared first on NEO BLOG.

01 Dec 14:12

The Pros and Cons of Free Help Desks + 5 Options to Consider

We weigh the pros and cons of free vs. paid, plus list 5 tools to consider if a free help desk is right for your company's support team. Read the full article

05 Nov 14:35

Wallace - Performance and performance analysis

by Donald Clark

Guy Wallace is a Performance Analyst and Instructional Architect who has spent over 40 years in Learning & Development. With a background as a journalist in the Navy, then video production for training, moving into self-paced instruction at Motorola, and later consulting on Curriculum Architecture Design projects for major clients, his focus has always been on ‘performance’, in both analysis and design. His focus could be seen as lying at both the top and tail of learning design – an emphasis on up-front analysis with its eyes on end-point performance. He has published 17 books and many articles on the methods of, and need for, a performance approach to learning, rather than just training. 

Performers not learners 

For Wallace, the culture of L&D is very classroom based, so analysis of real-world performance, in context, seems alien. Practitioners are more comfortable with didactic training, leaving transfer and performance to the learner. Customers usually see training as the sole solution to their problem, yet that problem almost always involves needs and performance beyond learning alone.   

It is a mistake to focus immediately on content and topic solutions. Wallace reverses this to put actual performance centre stage in the learning process. He is single-minded in his belief that a focus on performance, along with thorough analysis of performance context, unlocks the right solution, so much so that he likes to call his target audience performers, rather than learners. 

Performance Analysis 

As a method he believes that rather than push back at training requests, one should let the analysis data guide the client’s decision making on whether to continue or not. 

This is why Guy believes in thorough up-front analysis. His approach drew from Geary Rummler for guidance on performance analysis focused on tasks and outputs, and from Tom Gilbert with his focus on accomplishments and worthy outputs, not just behaviours. Bob Mager provided the tools for performance analysis, gap analysis and the writing of behavioural objectives. Another huge influence is Richard E. Clark on cognitive task analysis and gap analysis. 

First one must understand whether the actual performance requires memorisation or reference to resources. Rather than automatically produce training, he believes that one should default to job aids, then job aids embedded in training, only then training to improve critical knowledge and skills.  

So ‘performance’ needs to be unpacked through Performance Analysis, ideally starting with a facilitated group process involving 8-12 master performers, as well as subject matter experts, supervisors and management, and sometimes novice performers. They must work together to produce a performance model of ideal performance and then review what the gaps are, along with their causes for non-master performers. Identifying non-Knowledge/Skill obstacles is also required. The more that is uncovered through this form of analysis, the greater the eventual impact on performance. 

His four instructional analysis areas are:  

  1. Target audience 

  2. Performance (ideal & gaps)  

  3. Enabling knowledge and skills  

  4. Existing content assessments for potential reuse 

This last approach is to increase the reuse of client content, either “as is” or “after modification” to reduce costs and speed up performance improvement. 

This is accompanied by four means of learning: 

  1. Leave it to informal learning 

  2. Standalone job aids 

  3. Job aids embedded in training 

  4. Training for memorisation and honing of critical skills 

From this, his Modular Curriculum Development method, ADDIE in structure, has six phases: Project kick-off; Analysis; Design; Development; Pilot; and Revision and Release – with four gate review meetings with the clients. 

Influence

Guy has been pushing the performance approach for decades and now that it has come back into the fold, through the informal learning movement, 70:20:10, learning in the workflow and technology such as Learning Experience Platforms, he is seen as one of the gurus in this field. He is part of a movement that seeks to avoid unnecessary instruction. Technology that delivers workflow learning has also given Wallace’s approach new impetus, as job and performance aids are now common in LXP delivery, with its focus on search and pull, rather than push. 

Bibliography 

Wallace, G.W., 2021, Performance-based Lesson Mapping  

Wallace, G.W., 2011, The Curriculum Manager’s Handbook

Wallace, G.W., 2021, The 3 Ds of ThoughtFlow Analysis  

Wallace, G.W., 2020, Conducting Performance-Based Instructional Analysis  

Wallace, G.W., 2011, Analysis of Performance Competence Requirements

Wallace, G.W., 2011, Performance-based Curriculum Architecture Design

Wallace, G.W., 2011, Performance-based Modular Curriculum Development

Wallace, G.W., 2011, Developing Your Management Areas of Performance Competence

Wallace, G.W., 2011, From Training to Performance Improvement Consulting

Wallace, G.W., 2011, The Fifth Management Foci

Wallace, G.W., 2011, Lessons in Making Lemonade, Volume 1

Wallace, G.W., 2011, Lessons in Making Lemonade, Volume 2


01 Nov 08:56

Remembering piezoelectric pioneer Jacques Curie

by Editorial Team
The French physicist, Jacques Curie, born on 29 October 1855, is remembered today for his discovery of piezoelectricity.
18 Oct 08:17

Wales launches drive to keep young people in their homeland

by Steven Morris

Economy minister tells of ‘significant challenge’ of retaining and attracting talent to support country’s economy

The Welsh government is launching a drive to persuade more young people to remain in their homeland amid growing concerns that the percentage of working-age citizens is dropping to worryingly low levels.

Ministers fear that unless the “brain drain” is stopped – and more talented people can be tempted in – within a few decades the country may struggle to pay the bills to look after its ageing population.

Continue reading...
18 Oct 08:12

learning in the complex domain

by Harold Jarche

Personal knowledge mastery (PKM) can be a lens to examine how knowledge flows in organizations and human systems, especially from a perspective beyond formal training and education.

“A model of curation for the digital era that is being used in health and care is Harold Jarche’s ‘Personal Knowledge Mastery’ (PKM). This is about individuals making the best use of their networks and other sources of knowledge so that they can keep up to date with the most effective thinking in their area and practice new ways of doing things. Leaders who take responsibility for their own effectiveness through PKM create leverage and value for their organisations. The underpinning framework for curation within PKM is ‘seek, sense, share’. ‘Seeking’ is about finding things out and keeping up to date; ‘pulling’ information, but also having it ‘pushed’ to us by trusted sources. ‘Sensing’ is about making sense and meaning of information, reflecting and putting into practice what we have learned and plugging information into our own mental models and turning it into knowledge. ‘Sharing’ is about connecting and collaborating; sharing complex knowledge with our own work teams, testing new ideas with our own networks and increasing connections through social networks.” —UK National Health Service White Paper: The new era of thinking and practice in change and transformation

In addition, PKM is much more than a model of curation.

“Seek > Sense > Share are three elements at the core of Harold Jarche’s Personal Knowledge Mastery (PKM) Framework. With PKM, he shaped one of the most persuasive approaches to personal and professional development, combining natural ways of learning with an approach to sensemaking and contributing to a larger collective.” —GIZ.DE

Personal knowledge mastery is a framework that connects working and learning. Much of what professionals and most adults learn is from experience and interactions with other people, at work or outside of it. We learn from experiences and exposure to people and ideas.

The US-based Hearing First community “supports families and professionals to help children with hearing loss have the opportunity to take advantage of access to sound — a critical building block for future success”. It uses the PKM framework as a foundation for learning.

“Our world has opened up, and traditional learning channels (while still completely valid) are no longer sufficient on their own. We need to embrace this new personal learning channel: the digital world.

There is a solution for personal learning in the Digital Age: Personal Knowledge Mastery (or PKM). By incorporating PKM into our new age behavior, we’ll be able to accomplish the goals we have to improve ourselves and our listening and spoken language practice. You may have heard of personal knowledge management, but we don’t want to just manage knowledge. We want to master it.

The first step to personal knowledge mastery is the Seek-Sense-Share learning model. It was created by Harold Jarche, connected learning specialist, and it’s a concept that Hearing First supports for learning through our personal and professional networks. This learning model helps us make that data meaningful by being productive in our work and in the world of LSL. So, how does Seek-Sense-Share work exactly?” —PKM at Hearing First

 

Cynefin

“The Cynefin® framework was developed to help leaders understand their challenges and to make decisions in context. By distinguishing different domains (the subsystems in which we operate), it recognises that our actions need to match the reality we find ourselves in through a process of sense-making. This helps leaders cultivate an awareness of what is really complex and what is not and respond accordingly, so that no energy is wasted in overthinking the routine but they also never try to make the complex fit into standard solutions.” —Cynefin Centre

Cynefin can help us connect work and learning, especially for emergent and novel practices, for which we do not have good or best practices known in advance. When we want to create a conducive learning environment for knowledge workers, the Cynefin framework helps us to see the inherent weakness of instructional systems design (ISD) which works from the premise of predetermined learning objectives and activities, usually based on good and best practices observed in the workplace.

cynefin martin berg

Image: Martin Berg

The Cynefin framework adds to PKM with levels of abstraction. Low levels of abstraction mean that information and knowledge are understandable to few people. The lowest level would be a person understanding something only to themselves. Higher levels of abstraction would make this more understandable to more people, but would lose nuance and context in the process. High levels of abstraction are good for things that everyone should understand, such as the symbols and markings on a map.

The image below takes the basic PKM model — with teams in blue, communities in red, and networks in green — along two axes: high & low structure, and low & high abstraction. These are split in half — one for the Complex domain, and the other for the ordered domains (Complicated & Clear). The Chaotic domain has unique conditions and requires a different approach, beyond this post.

There are (at least) two modes for each form required to work and learn.

Teams can be semi-permanent and collaborative in ordered domains but should be quicker-forming temporary negotiated hierarchies in the complex domain.

Formal communities can provide continuity in ordered domains but informal communities are needed to provide more flexibility in crossing expertise silos and disciplines.

Established knowledge hubs provide all the structured information that a discipline requires, like the Project Management Body of Knowledge, but open knowledge networks are better when facing the complexity of the SARS-CoV-2 pandemic as they need to quickly incorporate new findings and knowledge.

Given that all organizations are likely to face complex challenges at some time, these forms for complexity should be incorporated into how organizations are structured, how learning can be supported, and how professionals engage. For some aspects of this, an individual may need permission, but much of this can be done independently and especially interdependently between professionals.

PKM with a cynefin lens

Examples from the Complex Domain

MSF

Médecins Sans Frontières (MSF) has learned to deal with complexity and has come up with some guidelines to help those in the field.

  • Everything is political and influences medical assistance.
  • Gut feeling is very important to assess complex situations.
  • Finding common ground between parties in conflict is very difficult, and too often simple but ineffective solutions are chosen.
  • The situation is always changing and there is a need for constant reflection, as individuals and at an organizational level.
  • Impartiality [trust] is the ‘red line’ that cannot be crossed.
  • Every action is a compromise.
  • Conflicts are messy & dirty – therefore the humanitarian assistance is messy & dirty.
  • Learning through constant discussions is critical for all members of the organization.
  • MSF has a culture of debate and exposing the truth and this lets the organization move forward.

‘Marie Noelle Rodrigue, operations director of MSF in Paris, said: “The time has come to explain the fragile equilibrium between the price it is necessary for an organisation to pay so that you are helping the victims.
“Often that means making a compromise to a degree where you are helping the authorities. This is a question that no-one has wanted to examine and it is good that MSF have looked into it and I think we are happy that we’ve done it honestly.”’ —The Guardian 2011-11-20

The Linux Kernel

“Hierarchies might let you make one-off decisions at a faster rate, but, ultimately, they’re just not as responsive in the long term … Today, [the Linux kernel] stands as the very best solution to a growing number of technological problems, but it didn’t spring from a single person’s head overnight. Decades of work made it the flexible, superior solution it is today. Local improvements and impassioned debates between key stakeholders continue to refine it.” —OpenSource 2016-03-29

The Chaordic Organization — VISA

chaordic [kay-ordʹ-ic], adj., fr. E. chaos and order.  1. The behavior of any self-organizing, self-governing, organ, organization, or system that harmoniously exhibits characteristics of both order and chaos.   2. Patterned by chaos and order in a way not dominated by either.  3.  Blending of diversity, chaos, complexity and order characteristic of the fundamental organizing principles of evolution and nature. —DeeHock.com

“Our current forms of organization are almost universally based on compelled behavior — on tyranny, for that is what compelled behavior is, no matter how benign it may appear or how carefully disguised and exercised. The organization of the future will be the embodiment of community based on shared purpose calling to the higher aspirations of people.

Formation of a chaordic organization is a difficult, often painful process, but one also filled with joy and humor. Entirely different dynamics of judgment, behavior, capacity, and ingenuity can evolve. Small shifts in deeply held beliefs and values can massively alter societal behavior and results — in fact, may be the only things that ever have. That is my hope for our future.

I know it can happen. I’ve been there — or at least gone part of the way — during the formation of VISA and other chaordic organizations. It’s very difficult to put in words, for in truly chaordic organization there is no destination. There is no ultimate being. There is only becoming.” —Dee Hock 1999

Silo Thinking — Orthodoxy, illusio, and playing the scientific game

If this pandemic is teaching us anything, it’s that experts disagree, nobody has all the answers, and we are mostly making things up as we go. In a crisis it is important to act but even more important to learn as we take action. Add in the human factor that some people are always trying to take advantage of any situation and we start to float in a liquid surround of misinformation, propaganda, and half-truths.

Medicine is composed of many silos of expertise. They often see situations from quite different perspectives. Experts in all disciplines have to get out of their silos and connect in multidisciplinary subject matter networks. A lone expert, or even a lone discipline, is obsolete in the network era. Only cooperative networks will help us make sense of the complex challenges such as a pandemic.

“Three fields—political, state (policy and regulatory), and scientific—were particularly relevant to our analysis. Political and policy actors at international, national, and regional level aligned—predominantly though not invariably—with medical scientific orthodoxy which promoted the droplet theory of transmission and considered aerosol transmission unproven or of doubtful relevance. This dominant scientific sub-field centred around the clinical discipline of infectious disease control, in which leading actors were hospital clinicians aligned with the evidence-based medicine movement. Aerosol scientists—typically, chemists, and engineers—representing the heterodoxy were systematically excluded from key decision-making networks and committees. Dominant discourses defined these scientists’ ideas and methodologies as weak, their empirical findings as untrustworthy or insignificant, and their contributions to debate as unhelpful.”
 —Wellcome Research 2021-05-24

Open Sharing on Twitter

Here is an example of 13 people in various specialties — epidemiology, complexity, public health, sociology — from four countries (CA, UK, US, ZA) who share their knowledge in the open. By combining their views on a platform like Twitter, we are better informed about the current coronavirus pandemic: see the Twitter Pandemic List.

Democratizing Knowledge

“The pandemic has empowered us to become co-creators, co-producers, and co-distributors of what we know — While each unique, these cases are all organic efforts to share useful knowledge and create new venues of access. The gains of the burgeoning knowledge commons in response to Covid-19 are twofold.

First, individuals are empowered to become co-creators, co-producers, and co-distributors of information for the benefit of their communities. The need for local knowledge in this pandemic, combined with the use of digital venues like Google Sheets or Minecraft, enables individuals to participate in knowledge production when ordinarily they would not … Second, the accessibility of these knowledge products enables innovation. The release of patents under an open license means that inventors and manufacturers can build needed solutions without the worry of a lawsuit.” —Wired 2020-05-27

• Did you find this post useful? Check out the perpetual beta series

06 Oct 05:43

Never compromise safety thanks to IECQ

by Claire Marchand

Semiconductors rarely make headlines in the mainstream press – they are usually confined to trade magazines and specialized blogs. This has changed recently because of a shortage in the supply chain of chips that potentially affects all industry sectors whose products rely on electronics.

06 Oct 05:43

Latest standardization figures at SMB Meeting in Dubai

by editorial team
The hybrid SMB meeting took place over the weekend during the IEC General Meeting in Dubai and was an opportunity to check out the latest standardization figures.
06 Oct 05:43

Welcoming SIRIM into the IECEx CB network

by Editorial team
IECEx, the IEC System for Certification to Standards Relating to Equipment for Use in Explosive Atmospheres, has welcomed SIRIM into its certification scheme for equipment.
06 Oct 05:42

Risk prevention in the Ex sector

by Claire Marchand

Nowadays, workplace safety is of major importance, regardless of business size or type of activity. Many countries around the world have put in place strict health and safety rules and regulations to protect their working populations.

06 Oct 05:42

IEC recognizes four experts with its Thomas Edison Award

by Editorial team
The IEC has presented its Thomas Edison Award to four individuals who have made outstanding contributions to the IEC.
17 Sep 06:29

Top KM Threats & Opportunities for 2021 & 2022

by Lauren Trees
20 Aug 06:53

6 Digital tools that help teachers create effective rubrics

by Diana Z

The grading process is time-consuming, and it can be overwhelming and frustrating. Grading can even influence which assignments teachers choose to give to students, as they anticipate the workload of marking each submission. A solution comes from rubrics, which are benchmarks used in grading to help assess students’ learning.

Rubrics include the criteria for evaluation and the level of performance with descriptions for each. They help students understand what is expected of them and therefore regulate their work according to the demands. Also, rubrics help teachers provide objective feedback to students, facilitating the desired outcome, which is deep learning.

Consequently, rubrics not only facilitate the marking process but they make it objective, concise, and reliable when grading assignments.


Read more: What is the role of rubrics in performance-based education?


The rubric’s features are essential because they provide clear guidelines for the teacher. The goal is to reach a consistent grading process.


While pen-and-paper rubrics are generally time-consuming to create, web-based rubric generators are more efficient. Teachers can access templates that work across the curriculum, various subjects, and learning activities. They can also customize available templates to suit their classes’ needs and reuse rubrics anytime.

Here are six digital tools for creating effective rubrics:

  1. Rubistar

    Rubistar is a web-based rubric generator for project-based learning (PBL) activities that allows teachers to create or customize premade rubrics for free. Teachers have to register to save and reuse their rubrics. Once created, they can view, edit, analyze or delete rubrics at any time.

    The analysis feature provides information regarding how many students received the same rating, whether the assignment was complicated or straightforward, ambiguous or clear for students, and if the criteria of a certain level of performance were hard to reach. As a result, teachers can adapt their rubrics and lessons to suit the students’ needs and ensure progress.

    Teachers can choose from various rubrics for oral presentations, research, writing, science, work skills, products, math, art, music, or reading. Then, they can establish each criterion from a list and the levels of performance with their description.

  2. Rubric Maker

    Rubric Maker is a tool that assists teachers in the assessment process, helping them articulate their expectations and how the students will be evaluated. Students receive a complete scheme of goals to reach success.

    With this tool, teachers can create quality rubrics using pre-established quality ratings, such as “exceed expectations, meets expectations, needs improvement, and below expectations”, or customize them to suit the needs of each class.

  3. Quick Rubric

    Quick Rubric is a tool used to score assignments based on how students meet specific criteria. Rubrics can be used to evaluate various performance-based assignments, such as oral presentations, essays, or projects. Teachers can create rubrics that have different formats depending on the number of criteria and the levels of quality. The performance ratings can be numerical, descriptive, or both.

    With Quick Rubric, teachers can eliminate grading bias thanks to the specific criteria laid out in the rubric. This translates into consistent, accurate scoring when multiple evaluators use the same rubric for the same assignment.

    The templates provide three scoring levels: proficient, emerging, beginner, but you can personalize the rubrics by editing them or adding new ones. The final rubric contains the title, description, and instructions.

  4. OrangeSlice: Teacher Rubric

    OrangeSlice: Teacher Rubric is a Google add-on that makes grading productive and professional. Teachers create, distribute and receive submitted assignments from students through Google Classroom, and Teacher Rubric facilitates the grading process. The rubrics appear in the right half of the Google Docs document to be assessed, and the final grade is a few clicks away.

    This tool allows teachers to offer objective feedback to all students while benefiting from a consistent grading process.

    When you open an assignment, the add-on assists the grading process. Teachers first have to choose between descending and ascending performance-level progression, then between traditional scoring (A, B, C, D, E, F), “great, good, needs more” (which can be changed later), or a new ranking of their own. They also have to choose the categories to be evaluated, or create new ones, before generating the rubric.

    Educators can alter the final score by using the options “extra credit,” “late penalty,” or “plagiarized penalty.” The add-on then automatically processes the grade and inserts the details into each student’s Google Docs assignment, so students can see in detail how they were evaluated. Teachers can also generate holistic and analytic rubrics for a better understanding of student achievement.

  5. TeAch-nology

    TeAch-nology is a website full of resources for teachers, including a large selection of rubrics. Teachers can choose to generate numerous rubrics on different activities or create personalized ones. The templates make grading easy, especially when students can choose how to present their work. They can generate templates suitable for various products, such as essays, posters, or presentations, and ensure they stay objective and maintain fairness even when assessing different outputs for the same task.

    Rubrics allow students to evaluate their work or their peers’. With more than five hundred printable rubrics and the general rubric generator, teachers have a rubric suitable for every assignment.

  6. NEO LMS

    NEO LMS has a rubrics feature that allows teachers to create, edit, save and reuse rubrics for a straightforward grading process. Rubrics are integrated within the assignment submission page. You can create a new rubric for each assignment or reuse the ones in your library.

    NEO offers a personal library and access to school, organization, and district libraries with available resources such as rubrics created by fellow teachers. Schools can also ensure that grading is objective and consistent by using the same rubric for similar tasks. Moreover, they can ensure transparency and reliability regarding students’ learning levels.

    Creating rubrics with NEO is easy since teachers can use the template provided by the platform or reuse the ones from the library. When grading, teachers need to select a rubric to automatically insert it into the students’ submission page. With only a few clicks, they can grade their submissions, adjust the grades by subtracting points, for instance, and leave a comment or an attachment as feedback. As a result, students understand their performance by receiving the full details of their evaluation.

To sum up

Rubrics are indispensable tools for a teacher, making grading time-efficient, objective and reliable. They also come with benefits for the students in terms of clear expectations and realistic performance analysis.

The post 6 Digital tools that help teachers create effective rubrics appeared first on NEO BLOG.

11 Aug 07:01

Instructional Design Research, Tools, & More: ID Links 8/10/21

by Christy Tucker

As I read online, I bookmark resources I find interesting and useful. I share these links periodically here on my blog. This post includes links on instructional design research and principles, productivity, video, animation, visual design, an elearning example, and my article on scenario-based learning for TD Magazine.

Instructional design research and principles

Spaced learning

Space invaders – E-Learning Provocateur Ryan Tracey clarifies related and often confusingly overlapping terms related to the spacing effect for learning: spaced presentation, spaced practice, spaced retrieval, distributed practice, expanding practice, and more.

Learning objectives

Learning Objectives: GOAL!?! – 3-Star learning experiences Summary of research on the value of telling learners the objectives at the beginning of training. The research supports giving learners specific “focusing objectives” to help them recognize what’s important. However, that doesn’t mean those objectives need to be the same formal learning objectives we use as IDs. In fact, using objectives as multiple choice questions to show people what they don’t know yet may be effective.

As instructors and designers, we need to keep in mind that there can be other reasons to use objectives and we need to clearly distinguish between objectives that we use as instructional/learning designers versus the ones we might use for learners.

First Principles of Instruction

First Principles of Instruction summary • M David Merrill • myBRAINisOPEN A 12-part series on Merrill’s First Principles of Instruction

In First Principles of Instruction, Merrill identifies five core instructional design principles which he has synthesised from his review of all of these theories, frameworks, and models. He then sets out ways in which these principles can be systematically used to inform the design and development of learning activities, (both online and in a face-to-face context). Merrill then makes a case that following these principles should lead to effective, efficient and engaging learning experiences.

Problem-centred: Learning is promoted when learners acquire knowledge and skill in the context of real-world problems or tasks.

Activation: Learning is promoted when learners recall or apply existing knowledge and skill as a foundation for new skills.

Demonstration: Learning is promoted when learners observe a demonstration of the knowledge and skill to be learned.

Application: Learning is promoted when learners use their newly acquired knowledge and skill to solve new problems or carry out tasks.

Integration: Learning is promoted when learners reflect on, discuss and defend their newly acquired skill or integrate the skill into a real-world activity.

Selected quotes from the summary of Merrill’s First Principles

Instructional design job listings

How Instructional Design Is Operationalized in Various Industries for job-Seeking Learning Designers: Engaging the Talent Development Capability Model | SpringerLink Research comparing job listings to the ATD capability model.

Using the preferred reporting items for systematic reviews and meta-analyses (PRISMA) organizational framework as our method of reviewing job postings, we found that instructional design, talent delivery and facilitation, technology application, communication, and collaboration and leadership capabilities appeared the most frequently.

For corporate settings, more jobs than expected require bachelor’s degrees and fewer jobs than expected require a master’s degree or higher. On the contrary, for higher education settings, more jobs than expected required master’s degrees or higher and fewer jobs than expected required bachelor’s degrees. While this may not be surprising to some job seekers, it does indicate that job applicants with bachelor’s degrees already possess the educational requirements for a corporate role.

Improving productivity

Introduction to the Zettelkasten Method • Zettelkasten Method

A method for taking notes where you can link ideas together. This was originally designed as a method for taking paper-based notes but with effectively a hyperlink between ideas. This isn’t tagging per se, but a way to connect ideas and keep a large amount of notes organized over time.
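The core of the method is simply a graph of atomic notes connected by links. As a rough illustration of the idea (my own sketch, not any particular app's API; all names here are hypothetical), the structure can be modelled in a few lines of Python:

```python
class Zettelkasten:
    """Minimal linked-note store: notes are atomic, and the value comes from links."""

    def __init__(self):
        self.notes = {}  # note id -> body text
        self.links = {}  # note id -> set of ids this note links to

    def add(self, note_id, body, links=()):
        """Create a note, optionally linking it to existing notes."""
        self.notes[note_id] = body
        self.links[note_id] = set(links)

    def backlinks(self, note_id):
        """Notes that point *to* note_id -- how related ideas resurface later."""
        return {src for src, targets in self.links.items() if note_id in targets}


zk = Zettelkasten()
zk.add("2021-08-01a", "Spacing effect: spreading practice over time aids retention.")
zk.add("2021-08-02b", "Retrieval practice strengthens memory.", links={"2021-08-01a"})
print(zk.backlinks("2021-08-01a"))  # following backlinks surfaces the connected idea
```

Backlinks are the key design choice: because every link is traversable in both directions, old notes keep resurfacing next to new ones, which is what keeps a large collection organized over time.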

85+ Most Useful Time Saving WordPress Keyboard Shortcuts

I want to learn to be more efficient using the blocks in WordPress, so I’m saving this list of keyboard shortcuts

Video, animation, and visual design

Free Stock Video Footage, Royalty Free Videos for Download

Free stock videos; commercial use is OK. These could be used as B-roll, transitions, or backgrounds.

CreateStudio Animation Made Easy – CreateStudio

Tool for creating 2D and 3D animated videos. The basic plan is $99 for a lifetime license, with limited characters and templates.

Tint and Shade Generator

Provide hex codes for colors and get tints (lighter) and shades (darker) of that color. Useful for staying within a color scheme while still providing some variety.
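The underlying arithmetic is just mixing each RGB channel toward white (tint) or black (shade). A quick sketch of how such a generator might compute them (my own illustration, not the tool's actual code):

```python
def hex_to_rgb(hex_code):
    """'#336699' -> (51, 102, 153)."""
    h = hex_code.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_hex(rgb):
    """(51, 102, 153) -> '#336699'."""
    return "#{:02x}{:02x}{:02x}".format(*rgb)

def tint(hex_code, factor):
    """Lighten by mixing each channel toward white (factor 0.0-1.0)."""
    return rgb_to_hex(tuple(round(c + (255 - c) * factor) for c in hex_to_rgb(hex_code)))

def shade(hex_code, factor):
    """Darken by mixing each channel toward black (factor 0.0-1.0)."""
    return rgb_to_hex(tuple(round(c * (1 - factor)) for c in hex_to_rgb(hex_code)))

print(tint("#336699", 0.2))   # a lighter step of the same hue
print(shade("#336699", 0.2))  # a darker step of the same hue
```

Stepping the factor in even increments (0.1, 0.2, ...) gives a consistent ramp of tints and shades within one color scheme.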

Scenario-based learning

Survive an Earthquake – INSTRUCTIONAL DESIGN BY TRACY CARROLL

In this sample, Tracy used Storyline to simulate the look and feel of BranchTrack. The choices are styled with numbers and text like BranchTrack, and the push transition between slides mimics the effect between choices. If you don’t have BranchTrack but like that style, this shows how you can accomplish the same effect in Storyline. This could be used for a lot of branching scenarios or short sims.

Set the Scene

My article on scenario-based learning for TD Magazine (members only)

Additional resources

Check out my complete library of links or my previous bookmarks posts.

The post Instructional Design Research, Tools, & More: ID Links 8/10/21 appeared first on Experiencing eLearning.

04 Aug 12:12

Goldman Sachs | Firm won't cut 95-hour week despite backlash - raises pay instead

by HR Grapevine

This week, following complaints from junior workers of 95-hour working weeks, abuse from senior staff and no work-life balance, the finance giant Goldman Sachs said it would increase its base pay for first-year bankers to £80,000, with second-year bankers seeing a rise to £89,000.

Earlier in the year, junior workers at Goldman Sachs spoke out about the extreme burnout being caused by “inhumane” working conditions and long working weeks, as reported by the Financial Times.

In a leaked 11-page presentation created by 13 Goldman Sachs workers back in March, research showed that the average number of hours worked in a single week totalled 105, whilst the average number of hours slept per night was just five.



“There was a point where I was not eating, showering or doing anything else other than working from morning until after midnight,” commented one of the anonymous creators of the document.

At the time, bosses at the firm pledged to change historic practices to prevent younger workers from getting burned out. Among the various touted improvements to working life promised by CEO David Solomon was ‘work-free Saturdays’; however, he also warned that high volumes of work were likely to continue across the company.

“In the months ahead, there are times when we’re going to feel more stretched than others, but just remember: If we all go an extra mile for our client, even when we feel that we’re reaching our limit, it can really make a difference in our performance,” he said, as reported by The Guardian.

‘It isn’t the plaster that can heal all issues’

Commenting on the news, Iain Thomson, Director at Sodexo Engage, noted that wage increases do not serve to solve the ‘underlying issue’ around the cases of burnout among Goldman Sachs’ younger workers.

“While salary is of course a small part of work incentives, it shouldn’t be the be all and end all, nor will it resolve the underlying issues that often drive top talent to leave, like burnout or poor management,” he said.

“Enhancing salary only papers over the cracks. Instead, employers should review their overall employee experience from workplace culture, to work-life balance and employee benefits.”

In addition, the outcry from workers has drawn criticism from senior figures in the industry. Xavier Rolet, who was head of the London Stock Exchange for eight years, called the young workers ‘entitled’, noting in a LinkedIn post that he would regularly work 130 hours a week, seven days a week, in the 1980s.

Speaking to the Financial News, he said: "It's a free world. If you don't love what you're doing or think the hours don't suit your lifestyle, by any means do something else.”

21 Jul 13:19

Learning tech capability in L&D could be holding it back

by Ian Turner

No matter your role on a learning and development team or the structure of your organisation, it’s important to consistently review and evaluate innovative technologies, tools, and trends to determine if they make sense for your organisation.

Technology changes quickly, and if you don’t keep up, you’ll get left behind. Learning technologies (LT) do not just include new software and emerging tech. They also include the LT ecosystem, which is best described as a collection of people, processes and tools that deliver, integrate, and support the L&D function across your organisation. That whole ecosystem requires knowledge in assessing, defining, and articulating relevant requirements. Ensuring that the latest advancements help both the learner and the organisation means understanding the learners’ needs and overall experience.

What L&D capabilities are missing

Results from a recent L&D Capability Model self-assessment, where thousands of L&D professionals diagnosed their own skills and capabilities, showed technology application is one of the lowest-rated capabilities. In fact, of the 8,600 L&D professionals who took the assessment, only 40 percent received a high rating.

A recent report on L&D and Covid by the Ken Blanchard Companies cited concerns among L&D professionals on how to skillfully use new tools and platforms. Thirty-two percent of the 1,000 L&D respondents said e-learning and digital development tool proficiency is holding back their L&D staff.

It’s well documented, and has been for the past few years, that technology started playing an increased role in training delivery even before the pandemic halted face-to-face learning events. In 2019, more than 50 percent of all learning hours were delivered via technology-based methods, the highest percentage ever recorded.

The progress of tech in L&D

Just five years ago, 48 percent of organisations used technology-based simulations in learning and development programs, 75 percent used non-technology-based simulations, and 88 percent used scenario-based learning. Those numbers have increased to 75 percent, 87 percent, and 98 percent, respectively, reports Simulations and Scenarios: Realistic, Effective, and Engaging Learning.

“Technology should support learning, not dictate it,” Tareq Omairi wrote in a recent learning blog post.

Along with simulations, e-learning, and video, some of the technology and tools that can aid in training delivery include artificial intelligence (AI), augmented and virtual reality (AR, VR), and social learning.

L&D and artificial intelligence

According to JD Dillon, AI is defined as “a machine’s ability to perform cognitive functions typically associated with humans, such as perceiving, reasoning, learning, interacting, creating, and problem solving. AI commonly utilises machine learning algorithms to detect patterns and learn how to make predictions and recommendations by processing data and experiences, rather than by explicitly receiving programming instruction.”

Learning and development professionals can choose from a range of existing AI-enabled applications, including:

  1. Using data to proactively find individual employees’ knowledge and skills gaps. Then, supply the right support to the right person at the right time at the speed and scale of a global business.
  2. Applying data to improve measurement practices and, through the application of specialised machine learning, decide how L&D solutions are (or are not) affecting targeted business goals.
  3. Translating content in real time into any available language with rapidly increasing accuracy, and writing content faster and at a quality level that is similar to human authors.

AR and VR are not mainstream in learning but can be used for visualisation, immersion, and storytelling. Cost and digital literacy are factors in choosing to use them, and the time involved in prototyping AR and VR is still too long for the needs of many businesses and education providers.

To supply relevant and valuable training solutions to your organisation, you need to search for the most efficient tools to improve performance. Social media tools and new, creative ways to use them can help improve learning engagement and performance.

“L&D can move to a more proactive state with the newer tools now available,” Chad Udell wrote in ‘Shock of the New’. It’s a fun time to be in L&D, and “this new normal offers lots of opportunity to enable real change and improve performance in ways we have only dreamed about.”

About Avado

At Avado, we believe in unlocking potential and changing lives. Learning with us makes real, lasting change happen for individuals, and entire organisations. Through our connected learning experiences in Data, Marketing, People, and Agility, we can help you drive real change.

Going beyond just technical skills, our award-winning programmes help teams find success in an ever-evolving world. With a strong emphasis on the behavioural and mindset shifts needed to embed new capabilities, we’ll work together to develop leaders, teams and individuals through interactive learning experiences.

The post Learning tech capability in L&D could be holding it back appeared first on Avado.

21 Jul 13:19

5 things we learned from DfE’s final apprenticeship reform review

by Billy Camden

The Department for Education has today published its final review of its apprenticeship reform programme.

Officials established the programme, which has seen the launch of the levy and the introduction of standards among other changes, in 2015. It was scheduled to be delivered by the end of the 2020-21 financial year.

Today’s report reveals the government’s progress against key performance measures such as its 3 million starts target, diversity goals and public sector apprenticeship objectives.

FE Week has pulled out the five main findings.

 

1) 3m target missed by 600,000 starts

In its 2015 manifesto, the Conservative Party set an ambitious target of 3 million apprenticeship starts between 2015 and 2020, a target it kept in the 2017 manifesto.

Between the start of the apprenticeship reforms in May 2015 and January 2021, there were 2,373,100 starts, representing 79.1 per cent of the target.

The DfE’s report says that while the 3 million target was missed by 626,900 starts, over the same period apprenticeships have “become of longer duration and are now co-designed with employers” meaning that the starts now made on the programme are “into higher-quality training”.

 

2) Diversity and inclusion targets met

The DfE set a target to increase the proportion of apprenticeships started by people of black, Asian and minority ethnic (BAME) backgrounds by 20 per cent by 2020. This would result in 12.1 per cent of starts being by apprentices of BAME backgrounds.

This was exceeded, reaching 13.3 per cent of starts from people with BAME backgrounds by 2019/20, which is higher than the BAME employment rate of 11.7 per cent, according to today’s report.

The DfE also set a target to increase the proportion of apprenticeships started by those declaring a learning difficulty or disability (LDD). They aimed to increase LDD starts by 20 per cent, uplifting starts to 12.5 per cent.

This was achieved in 2019/20, with 12.5 per cent of starts declaring an LDD, the report states.

 

3) ‘More to do’ on achievement rates

The achievement rate for apprenticeship standards sat at just 58.7 per cent in 2019/20.

While this is an 11.8 percentage point increase from 46.9 per cent in 2018/19, the DfE admits “we know there is more to do”.

Additionally, official government data published in March showed that just 60.2 per cent of apprentices training on new-style standards stayed on their programme until the end in 2019/20. This figure sat at 48.3 per cent the year before.

Skills minister Gillian Keegan told FE Week’s annual apprenticeship conference in April that she has ordered an investigation into the “astonishingly” high drop-out rate.

 

4) Skills Index for apprenticeships falls – again

The principal measure the DfE uses for monitoring productivity impact is the Further Education Skills Index.

The Index estimates the aggregate value of the skills supplied by the FE system each year by aggregating earnings returns for all adult learners and apprentices who successfully complete their courses.

For apprenticeships, the Index increased every year from 2012/13 to 2017/18 but fell by 26 per cent in 2018/19 and a further 17 per cent in 2019/20.


FE Skills Index report 2021

The DfE blames this recent decline on a “fall in participation, resulting in lower achievement volumes”. Activity in 2019/20 was also “impacted by Covid-19 restrictions, which led to an increase in breaks in learning and fewer achievements than expected”.

However, the average value-added of individual apprenticeships has increased each year, with each learner who completed an apprenticeship in 2019/20 generating 27 per cent more value than in 2012/13.

This has been driven by a shift from intermediate towards advanced and higher apprenticeships, and towards sector subject areas associated with higher returns (engineering, construction and ICT), the report says.

 

5) Public sector target missed

Public sector bodies in England with 250 or more staff have had a duty to aim to employ an average of at least 2.3 per cent of staff as new apprentices over the period 2017 to 2021.

Over that period, public sector employment of apprentices averaged 1.7 per cent.

Following the failure to hit the target, and in order to “build on this success and continue to encourage public sector bodies to invest in apprenticeships”, the DfE has extended the public sector apprenticeship target for a further year, from 1 April 2021 to 31 March 2022.

The post 5 things we learned from DfE’s final apprenticeship reform review first appeared on FE Week.

21 Jul 12:57

5 Key Takeaways from The Godel POD: Keeping Connectivity in a Hybrid World

by Kara Frearson
I.gardner.gb

Good summary.

Last month, I was excited to launch the first instalment of The Godel POD, an exclusive podcast by Godel Technologies. My three-part series is called “Keeping Connectivity in a Hybrid World”. Throughout the series, I invited my guests to talk about topics and trends surrounding hybrid working, a conversation I’ve found myself having with everyone I speak to as people gradually return to the office.

While it’s hard to summarise all the insightful conversations that took place, here are my 5 key takeaways from my podcast series.

1. Flexible working is part of the new normal

Not only have we changed how we communicate with our teams, but many of us have also been given some form of ‘life’ back: reducing the hours of commuting, being able to nip out and pick the kids up from school, being able to self-isolate during the pandemic without it being detrimental to the business, or spending more time with the family. It’s about finding that balance, and flexible working allows us to do that.

On the flip side, I also posed the question about burnout: without that commute, are we getting the wind-down time we would previously have had? Paul Pilling, Engineering Experience and Process Principal at Covéa Insurance, definitely agreed, describing how losing that 40-minute commute to listen to music or a podcast was a challenge at first, though it has also given him the flexibility to see his family. In episode 1, Dan McNeil, Director of Engineering at Comply Advantage, made a point to highlight: “I think we need to manage burnout and we need to encourage people to have downtime,” which is key for us to remember given the responsibility we have to each other as people.

2. The water cooler conversations are here to stay

The podcast brought to life how frequently we had these ‘water cooler conversations’ and how important they are in day-to-day life, something I think we all took for granted before the pandemic. It’s still possible to have these conversations in the hybrid world by blocking out time for those informal chats with your team. In episode 2, I asked Paul whether he thought these conversations would fizzle out. He said, “I think it depends on your workforce and the culture of your organisation, because those water cooler chats for us have been on Slack or Teams.”

It’s no surprise that people want that social interaction and these chats are important going forward. People are using their digital platforms such as LinkedIn and Instagram to stay connected about more than just work updates.

3. It’s more important than ever to check in on people’s wellbeing

Whether it’s from home or within the office, I think something we can take away from the last 18 months is checking in on people, asking how they’re doing and communicating. And when I say communication, I don’t just mean “how are you feeling about this sprint?”; it’s more “how are you feeling this week?”. Remember your team are people too, with families at home, children up through the night and dogs that need walking in the rain. Sometimes, talking over the screen isn’t enough, as Constantine Grishel, Agile Delivery Coordinator at Godel, points out in episode 3: “I try to go to the office several days a week, and it really helps. Even with mental health, because when you see real people, you see your teammates.”

4. Be sure to include those that haven’t returned to the office

I asked all my guests “how do we stop bias towards those who aren’t working in the office?”, because I think it’s really important to remember that although people are gradually returning, a lot haven’t, and those people shouldn’t be excluded for being remote. I like some of the ideas discussed, such as everyone joining the call from their own computer with their screen turned on, ensuring everyone has a voice in a meeting and making everyone feel involved, wherever they are. As Diana Grishel, Senior Agile Delivery Coordinator at Godel, mentioned, we were all in this position last year, so there is an element of empathy; we have already been in those people’s shoes.

5. It really is possible to stay connected in the hybrid world

I think staying connected in a hybrid world is about picking up on the cues when you know a team member isn’t happy or feels disconnected. This relates nicely to something Diana mentioned in episode 3: “those cues that the team isn’t communicating on a good level, or they didn’t have trust in the team, this is something that we, as ADCs, can try to engage in the team and find out what’s going on.”

Something we come across a lot here at Godel is CTOs, development managers and heads of divisions talking about how they can get the most from their teams. How can we get more story points achieved? How can velocity improve versus the overhead count? How can my team be happier? But the one thing which comes up about how to get the most out of any team comes down to us as people, and the level and quality of communication we provide.

If you are interested in appearing in an upcoming podcast, please get in touch using the enquiry form.

The post 5 Key Takeaways from The Godel POD: Keeping Connectivity in a Hybrid World appeared first on Godel Technologies.

29 Jun 06:33

IEC issues new report on digital healthcare

by Editorial team
The IEC has published a new report identifying the advanced technologies that can be used for digital healthcare in 2030 and how they are addressed by standardization work.
28 Jun 07:42

A curriculum for a complex world – students’ views on digital literacy in the curriculum

by Debbie McVitty

Debbie McVitty reflects on the findings of a Wonkhe/Adobe qualitative study exploring student perceptions of the links between their curriculum, their future aspirations, and their development of digital literacy

The post A curriculum for a complex world – students’ views on digital literacy in the curriculum appeared first on Wonkhe.

14 Jun 07:19

Online and blended learning: Contexts and conditions for education in an emergency

by Christine Greenhow, Cathy Lewin
British Journal of Educational Technology, Volume 52, Issue 4, Page 1301-1305, July 2021.
09 Jun 06:27

THE FUTURE OF WORKPLACE LEARNING IS DIGITAL

by Steven Wright
There is no doubt the Covid-19 pandemic has changed our lives in lots of ways, and whilst we can all clearly see plenty of downsides, there...8th Jun 2021
04 Jun 07:06

How to shape digital culture in higher education

by Sarah Dyer

A network of educators met in March to think through how to make digital culture change happen. Here are the results.

The post How to shape digital culture in higher education appeared first on Wonkhe.