𝗪𝗿𝗶𝘁𝗶𝗻𝗴 𝘂𝘀𝗲𝗱 𝘁𝗼 𝗯𝗲 𝘁𝗵𝗲 𝘂𝗹𝘁𝗶𝗺𝗮𝘁𝗲 𝗯𝗮𝘀𝗲𝗹𝗶𝗻𝗲 𝘀𝗸𝗶𝗹𝗹. 𝗡𝗼𝘄, 𝗗𝗮𝘁𝗮 𝗟𝗶𝘁𝗲𝗿𝗮𝗰𝘆 𝘀𝗵𝗮𝗿𝗲𝘀 𝘁𝗵𝗲 𝘁𝗼𝗽 𝘀𝗽𝗼𝘁!

Is Data Literacy the new Writing? The data says yes. ✍️

If you asked leaders 10 years ago what the most critical day-to-day skill was, the answer was almost universally "communication and writing." Fast forward to today, and the landscape has completely transformed. Data is no longer a niche skill for analysts; it is the new baseline language of business.

📈 A massive 88% of leaders now rate basic data literacy as "important" or "very important" for day-to-day tasks.

⚖️ That officially puts data literacy on par with, and even slightly ahead of, our most trusted foundational skills, including writing (86%), project management (83%), and delivering presentations (81%).

🚨 60% of leaders surveyed admit their organizations currently have internal skill gaps in AI and data. They warn that this lack of literacy directly leads to slower innovation, poorer decision-making, and reduced competitiveness, according to new research published by DataCamp, based on a survey of 517 US and UK business leaders conducted in partnership with YouGov.

☝️ 𝙈𝙮 𝙥𝙚𝙧𝙨𝙤𝙣𝙖𝙡 𝙫𝙞𝙚𝙬: When I look at these findings, my mind immediately goes beyond the corporate boardroom and straight into our classrooms. For generations, our education system has been built on a core foundation: reading and writing. We spend over a decade teaching children how to craft the perfect essay, structure their arguments, and communicate clearly.

But if data is truly the new language of the modern world, our school curriculums are drastically out of date. We can't wait until people enter the workforce to teach them how to read a chart, spot a statistical bias, or interpret a dataset. If data literacy is now as critical as writing for professionals, we must start teaching it to our kids with the same urgency. It is time to add Data to the ABCs...

🙏 Thank you to the DataCamp research team for these insightful findings: Jonathan Cornelissen

🔑 Are we training our teams for this new reality, or are we still treating data like a niche technical skill?

#DataLiteracy #FutureOfWork
Educational Data Analysis
Explore top LinkedIn content from expert professionals.
-
This week's theme in my workshops (and, by extension, my posts to you here) is assessing data collection tools (like surveys) for inclusion and access.

Most of my workshops start at the same place – most participants have designed at least one survey in a current or past job or in their education. It then takes three hours and some meaningful collective learning to realize that planning a survey is much more than drafting a list of questions. It is an opportunity to connect with your community directly, hear their stories, and understand their experiences and expressions of engagement.

In this post, I want to share 5 "red flag" behaviors I often see during the survey design phase:

● When the only questions included ask for positive feedback. We all love hearing good things, but only asking for positive feedback closes off real growth opportunities. Example: A question like, "What did you love most about our event?" assumes the respondent loved the event and leaves no room for a different experience.

● When questions are overloaded with complicated words or jargon that only a few will know. You know your mission inside and out, but your community might not understand the same terms you do. Speak in their language. Think of your survey as a conversation. Example: A question like, "How would you rate the efficacy of our donor stewardship activities?" assumes everyone understands what "stewardship" means.

● When every possible question about every possible aspect of the mission is asked – because "why not". Surveys that run longer than 10-12 minutes, without context, can feel like asking for too much. Be mindful of your respondents and of what the data collection actually needs. Every question should have a purpose.

● When questions contradict anonymity. Our communities are diverse, and our surveys should hold a safe space for them. Balancing accessibility with truly useful demographic questions means not compromising anyone's anonymity – which makes the experience of collecting data easier and more meaningful. Example: A survey asking about racial and ethnic diversity in a group that is 99% homogeneous, which can make the 1% who are racially diverse nervous about a possible breach of anonymity.

● When questions do not offer an 'opt-out' because everything is required. Some questions may feel too personal or uncomfortable to answer, and our surveys must create space for that. Give respondents room to skip a question if they need to. Example: A survey that requires donors to disclose their income range without offering a way to skip the question if they're uncomfortable sharing that information.

Stay tuned for an upcoming post on what we can do differently.

Have any other such behaviors? Share them here. In the meantime, try some of these resources (all designed to do good with data): https://lnkd.in/gUK-6M_Y

#nonprofits #community
-
You ran the data meeting on Friday. Everyone nodded. Nothing changed on Monday.

Here's what really happened. Data was collected. The team discussed the data. But nobody decided 𝙝𝙤𝙬 𝙩𝙤 𝙩𝙚𝙖𝙘𝙝 𝙙𝙞𝙛𝙛𝙚𝙧𝙚𝙣𝙩𝙡𝙮.

Here's the problem: we've confused 𝘤𝘰𝘭𝘭𝘦𝘤𝘵𝘪𝘯𝘨 data with 𝘶𝘴𝘪𝘯𝘨 it. Data without a clear instructional response isn't a system. It's a filing cabinet.

So what does acting on data actually look like? After your next assessment, before your data meeting, ask your team one question: "𝗕𝗮𝘀𝗲𝗱 𝗼𝗻 𝘁𝗵𝗶𝘀 𝗱𝗮𝘁𝗮, 𝘄𝗵𝗮𝘁 𝗮𝗿𝗲 𝘄𝗲 𝗳𝗼𝗰𝘂𝘀𝗶𝗻𝗴 𝗼𝗻 𝗮𝗻𝗱 𝗵𝗼𝘄 𝗮𝗿𝗲 𝘄𝗲 𝘁𝗲𝗮𝗰𝗵𝗶𝗻𝗴 𝗶𝘁 𝗱𝗶𝗳𝗳𝗲𝗿𝗲𝗻𝘁𝗹𝘆 𝗻𝗲𝘅𝘁 𝘁𝗶𝗺𝗲?"

Not re-teaching the same lesson. Not moving on and hoping it clicks. 𝗛𝗼𝘄 𝗮𝗿𝗲 𝘄𝗲 𝗮𝗽𝗽𝗿𝗼𝗮𝗰𝗵𝗶𝗻𝗴 𝗶𝘁 𝗱𝗶𝗳𝗳𝗲𝗿𝗲𝗻𝘁𝗹𝘆?

Here's a simple three-step protocol to make that question actionable:

𝗦𝘁𝗲𝗽 𝟭: 𝗡𝗮𝗺𝗲 𝘁𝗵𝗲 𝗺𝗶𝘀𝗰𝗼𝗻𝗰𝗲𝗽𝘁𝗶𝗼𝗻, 𝗻𝗼𝘁 𝗷𝘂𝘀𝘁 𝘁𝗵𝗲 𝗺𝗶𝘀𝘁𝗮𝗸𝗲. Don't stop at "students got question 4 wrong." Ask why. Was it a procedural error? A conceptual gap? A language barrier? The misconception tells you how to respond. The mistake only tells you something went wrong.

𝗦𝘁𝗲𝗽 𝟮: 𝗠𝗮𝘁𝗰𝗵 𝘁𝗵𝗲 𝗶𝗻𝘀𝘁𝗿𝘂𝗰𝘁𝗶𝗼𝗻𝗮𝗹 𝗺𝗼𝘃𝗲 𝘁𝗼 𝘁𝗵𝗲 𝗺𝗶𝘀𝗰𝗼𝗻𝗰𝗲𝗽𝘁𝗶𝗼𝗻. If students have a conceptual gap, teachers should use the CRA model (Concrete, Representational, Abstract) as a guide. Start with manipulatives or real-world context, move to visuals, then rebuild the abstract. If it's procedural, slow down the steps and make student thinking as visible as possible. The response has to match the root cause, not just re-cover the content.

𝗦𝘁𝗲𝗽 𝟯: 𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗲 𝗮𝗻𝗱 𝗮𝘀𝘀𝗶𝗴𝗻 𝗼𝘄𝗻𝗲𝗿𝘀𝗵𝗶𝗽 𝗯𝗲𝗳𝗼𝗿𝗲 𝗹𝗲𝗮𝘃𝗶𝗻𝗴 𝘁𝗵𝗲 𝗿𝗼𝗼𝗺. Every instructional response needs a name attached to it. Who is trying what, in which class, by when, and what does that instruction actually look like? Without ownership, the plan dies in the meeting.

𝗗𝗮𝘁𝗮 𝗺𝗲𝗲𝘁𝗶𝗻𝗴𝘀 𝘀𝗵𝗼𝘂𝗹𝗱 𝗲𝗻𝗱 𝘄𝗶𝘁𝗵 𝗮 𝘁𝗲𝗮𝗰𝗵𝗶𝗻𝗴 𝗽𝗹𝗮𝗻, 𝗻𝗼𝘁 𝗷𝘂𝘀𝘁 𝗮 𝘁𝗮𝗹𝗸𝗶𝗻𝗴 𝗽𝗼𝗶𝗻𝘁.

♻️ If this idea resonates, repost to help school leaders and math teams turn data into action, not just conversation.

📧 If you're interested in more practical strategies like this, I'm launching a new newsletter called The 3-1-4, where I share practical strategies for improving math instruction and leadership. The first issue goes out on Pi Day (March 14). Link in the comments.

_______________________________

Hi, I'm Dwight Williams. A proud first-gen everything, I help schools and districts strengthen math instruction through coaching, curriculum support, and data-informed systems that drive student confidence and achievement.

👍🏿 Like | 🔔 Follow | 💬 Comment | 🔁 Repost
-
✋ Stop. If you're using AI in your classroom with students, you need to read this before next week.

Most of us worry about model training, data security and oversight when we put AI in front of students. What we don't expect is that the tool itself might be quietly passing children's device and browsing data into commercial advertising systems.

That was the gut-punch when I read "A Child Rights Audit of GenAI in EdTech". The audit was carried out under UK GDPR and the Age Appropriate Design Code (AADC). It showed that student-facing AI tools were setting advertising and analytics cookies by default, nudging children into "accept all," and in some cases still firing trackers even after "reject all."

The problem is that children's data is being profiled and fed into commercial systems they should be protected from by law. The AADC spells that out by requiring:
- Privacy settings at the highest level by default
- No nudges or "dark patterns" that push children into accepting more tracking
- Clear, child-friendly notices about how their data is used

Reading the audit made me act straight away. The next morning I tested the tool I use with my students, prepared to stop using it immediately if I saw the same problems. To my relief, it did not load advertising cookies at all. But when I followed the same steps described in the audit using Ghostery, I found that most other student-facing deployments I checked had commercial cookies firing by default. In some cases they still fired even after I clicked "reject all." A few tools had no "reject all" button at all.

I am not naming names in public. But where I have a connection, I have reached out directly to share the report and my own notes. I think vendors need to do better, and they deserve the chance to fix this.

💡 If you want to check the tools you use:
- Install Ghostery (it is free)
- In your browser's extension settings, allow Ghostery to run in incognito/private mode
- Set up a student room or equivalent in your tool
- Copy the join link
- Paste it into an incognito tab
- Watch Ghostery with its blocking switched off so you see the raw results

🔴 These are the things that should make you pause:
- Trackers loading the moment the page opens, before anyone has clicked anything
- Trackers that keep firing after you hit "reject all"
- No option to reject cookies in the first place
- Pop-ups or design tricks nudging you toward "accept all"

If you see any of that, you're looking at practices that clash with children's rights protections. Children have the right to learn in digital spaces that protect their privacy and keep them out of commercial advertising networks. That is not optional. It is a legal requirement.

Picking the right tools is key in education!

All of the screenshots are from after I pressed "Reject All" in the student access (where I had the option).

Gemma Gwilliam Emma Darcy
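For anyone comfortable with a little code, below is a minimal sketch of the same spot-check done programmatically instead of through Ghostery. It assumes Playwright is installed (`pip install playwright`, then `playwright install chromium`); the join URL is a hypothetical placeholder, not a real tool, and the script only surfaces hosts and cookies visible to a fresh browser context, so treat it as a rough first pass rather than a full tracker audit.

```python
# A rough programmatic version of the manual Ghostery check described above.
# Assumptions: Playwright installed (`pip install playwright`, `playwright install chromium`);
# JOIN_URL is a hypothetical placeholder, not a real tool.
from urllib.parse import urlparse
from playwright.sync_api import sync_playwright

JOIN_URL = "https://example-edtech-tool.test/join/abc123"  # placeholder student join link
first_party = urlparse(JOIN_URL).hostname

contacted_hosts = set()

with sync_playwright() as p:
    browser = p.chromium.launch()
    context = browser.new_context()   # fresh profile: no prior logins or consent choices
    page = context.new_page()

    # Record every host the page contacts before anyone clicks anything.
    page.on("request", lambda req: contacted_hosts.add(urlparse(req.url).hostname))
    page.goto(JOIN_URL, wait_until="networkidle")

    print("Hosts contacted on page load (excluding the tool itself):")
    for host in sorted(h for h in contacted_hosts if h and h != first_party):
        print("  ", host)

    print("Cookies set without any interaction:")
    for cookie in context.cookies():
        scope = "third-party" if first_party not in cookie["domain"] else "first-party"
        print(f'   {cookie["name"]} ({scope}, domain: {cookie["domain"]})')

    browser.close()
```

Unfamiliar third-party hosts or cookies appearing before any consent choice are exactly the red flags the audit describes.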
-
Last week, a digital transformation leader at a major EU educational organization contacted me, concerned. Their entire staff had been told by a visiting "AI literacy" speaker that it was perfectly fine to upload student work into ChatGPT or Gemini for grading, as long as it was "anonymized."

They asked me: Is this correct?

The answer is simple: No. You cannot simply strip names from student work and upload it to a large language model. This is a dangerous misconception.

Why? Because AI systems are not the same as Word or Google Docs. The way GDPR and the EU AI Act apply to generative AI is profoundly different from how they apply to traditional digital tools. Yet this was the official takeaway given to hundreds of staff. You can imagine my frustration.

Organizations need to carefully vet the expertise of anyone they bring in to train staff on AI. 'Early' 2023 AI adoption, a large follower count, and a few self-published books are not proof of experience, deep technical competence, or governance fluency. In fact, the wrong advice can expose your institution to major compliance, ethical, and reputational risks.

So what does need to be in place before you let a large language model process student or employee work in Europe? At a minimum:
🔹 A data protection impact assessment (DPIA) addressing AI-specific risks
🔹 A clear legal basis for processing under GDPR (consent is rarely sufficient)
🔹 Contracts with providers that establish data use, retention, and security
🔹 Governance processes aligned with the EU AI Act, GDPR, and sector-specific safeguards
🔹 Human oversight mechanisms to prevent bias, error, or misuse

Only then can AI be used to analyze, grade, or process human work.

To support schools and education organizations, I've created a staff briefing note and a free reference sheet that outlines these requirements in plain language. This cheat sheet is written for the EU and UK, but other nations should take note, because similar regulation is already in place for you, or on the way. You'll find it attached here.

We need to move beyond "AI literacy" as a buzzword and toward AI responsibility as a practice. The future of education, and the trust of students, parents, and staff, depend on it.

Do you need support on this? Our team at Kompass Education can guide you through. Contact us at info@kompass.education

Let AI governance be your North Star.

#AIGovernance #AIinEducation #AICompliance #EdTech #DigitalSafety
-
Designing effective surveys is not just about asking questions. It is about understanding how people think, remember, decide, and respond. Cognitive science offers powerful models that help researchers structure surveys in ways that align with mental processes.

The foundational work by Tourangeau and colleagues provides a four-stage model of the survey response process: comprehension, retrieval, judgment, and response selection. Each step introduces potential for cognitive error, especially when questions are ambiguous or memory is taxed.

The CASM model (Cognitive Aspects of Survey Methodology) builds on this by treating survey responses as cognitive tasks. It incorporates working memory limits, motivational factors, and heuristics, emphasizing that poorly designed surveys increase error due to cognitive overload. Designers must recognize that the brain is a limited system and build accordingly.

Dual-process theory adds another important layer. People shift between fast, automatic responses (System 1) and slower, more effortful reasoning (System 2). Whether a respondent relies on one or the other depends heavily on question complexity, scale design, and contextual framing. Higher cognitive load often pushes respondents into heuristic-driven responses, undermining validity.

The Elaboration Likelihood Model explains how people process survey content: either centrally (focused on argument quality) or peripherally (relying on surface cues). Respondents may answer based on the wording of the question, the branding of the survey, or even the visual aesthetics rather than the actual content, unless design intentionally promotes central processing.

Cognitive Load Theory offers tools for managing effort during survey completion. It distinguishes intrinsic load (task difficulty), extraneous load (poor design), and germane load (productive effort). Reducing unnecessary load enhances both data quality and engagement.

Attention models and eye-tracking reveal how layout and visual hierarchy shape where users focus or disengage. Surveys must guide attention without overwhelming it. Similarly, models of satisficing vs. optimizing explain when people give thoughtful responses and when they default to good-enough answers because of fatigue, time pressure, or poor UX. Satisficing increases sharply in long, cognitively demanding surveys.

The heuristics and biases framework from cognitive psychology rounds out this picture. Respondents fall prey to anchoring effects, recency bias, confirmation bias, and more. These are not user errors, but expected outcomes of how cognition operates. Addressing them through randomized response order and balanced framing reduces systematic error.

Finally, modeling approaches like cognitive interviewing, drift diffusion models, and item response theory allow researchers to identify hesitation points, weak items, and response biases. These tools refine and validate surveys far beyond surface-level fixes.
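As a small illustration of that last point, here is a minimal sketch of how a 2-parameter logistic item response model can flag weak items. The item names and parameters below are invented for illustration only; in practice they would be estimated from real response data with an IRT package.

```python
# A minimal sketch of flagging weak items with a 2-parameter logistic (2PL) IRT model.
# Item parameters here are invented for illustration; in practice they would be
# estimated from real response data with an IRT library.
import numpy as np

def p_endorse(theta, a, b):
    """2PL item characteristic curve: P(respondent at trait level theta endorses the item)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information the item provides at trait level theta."""
    p = p_endorse(theta, a, b)
    return a ** 2 * p * (1.0 - p)

# Hypothetical items: (discrimination a, difficulty/location b)
items = {"Q1": (1.6, -0.5), "Q2": (0.3, 0.0), "Q3": (1.1, 1.2)}

theta_grid = np.linspace(-3, 3, 121)  # range of respondent trait levels
for name, (a, b) in items.items():
    mean_info = item_information(theta_grid, a, b).mean()
    flag = "  <- weak item: low discrimination, adds little information" if a < 0.5 else ""
    print(f"{name}: a={a:.1f}, b={b:.1f}, mean information={mean_info:.3f}{flag}")
```

Items with very low discrimination (like the hypothetical Q2) contribute little information at any trait level and are candidates for rewording or removal.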
-
Children's information, and sharing it, is top of mind for regulators, as we have been telling our clients for a while, and as we saw yesterday in a new CA AG $500,000 settlement with Tilting Point Media LLC (Tilting Point) for #CCPA and #COPPA compliance issues in the mobile app game "SpongeBob: Krusty Cook-Off." Practice points:

Directed at children:
🔹 If you are aware that children under 13 are using your service, it is directed at children. Saying in your terms of service and privacy policy that consumers under 13 are not authorized to use it doesn't change this.

Regulator 1, 2, 3:
🔹 The CA AG will use every enforcement tool to ensure compliance with the law and that companies exercise diligence with privacy law requirements.
🔹 If one regulator tells you that you are not compliant (here, BBB National Programs' CARU): assess your compliance with other laws you could be enforced against by another regulator.

Data minimization:
🔹 Don't collect more personal information than reasonably necessary for a child to participate.

Mind your SDKs:
🔹 An SDK facilitates data sharing that can be a sale (CCPA) and/or unfair/deceptive (FTC) and/or subject to COPPA, just like any data sharing.
🔹 You need to know what information each SDK collects; evaluate contracts regarding sharing of data through them, making sure you have the right consent.
🔹 You may need a formal SDK governance framework.
🔹 Every year: assess data minimization and SDK usage (ensuring data flows change appropriately based on the consumer's age).
🔹 Every year: conduct adequate training for personnel on data sharing and SDKs.

Sale/share:
🔹 Disclose your sale and share practices correctly in your privacy notice.
🔹 Don't sell/share personal information of under-13s without parental consent.
🔹 When you do sell/share: provide a just-in-time notice explaining what information is collected, the purpose, the sale/share, a link to the privacy policy, and the parental or opt-in consent required. [The FTC also says this in BetterHelp]

Mixed audience:
🔹 When using an age screen, it has to be neutral.
🔹 Neutral means: (1) ask age information in a neutral manner that does not default to a set age of 16 or above or encourage users to falsify age information; (2) do not suggest that certain features will not be available; and (3) provide CLEAR AND CONSPICUOUS notice that the age entered should be accurate and is collected to ensure data use and advertising is appropriate.
🔹 If the person is under 13 or 16, direct them to a portion of the service that doesn't use data other than as permitted by COPPA/CCPA, or get parental/opt-in consent.

For ads in your apps, make sure they:
🔹 Are identified as being an ad;
🔹 Include a prominent one-click "X" or "Close" button;
🔹 Do not manipulate or deceive consumers into engaging;
🔹 Do not advertise activities or products in which children cannot legally engage or which they cannot legally possess.

#dataprivacy #dataprotection #privacyFOMO

Complaint: https://rb.gy/enu19e
Agreement: https://rb.gy/jq6lke
-
October comes next week, and so do new privacy requirements in three states. Here's a recap and what to check ⤵️

1️⃣ Colorado Privacy Act amendments related to minors' personal data will:
🔸 impose obligations where a controller knows or willfully disregards that a user is a minor;
🔸 require opt-in consent to sell or use a minor's personal data for targeted advertising, or to use system design features to increase engagement;
🔸 limit how precise geolocation data of minors can be processed; and
🔸 mandate data protection assessments in additional contexts.
Rulemaking is underway to provide further clarity on these new requirements, including to specify when a data controller "willfully disregards" that a user is a minor and what system design features increase engagement. See the draft regulations here: https://lnkd.in/gcBtzyTi

2️⃣ Montana privacy law amendments that:
🔸 lower the law's threshold for applicability;
🔸 remove the general non-profit exemption;
🔸 add privacy policy content requirements;
🔸 require sale and targeted advertising opt-out links outside the privacy policy; and
🔸 remove the right to cure violations.

3️⃣ Maryland's Online Data Privacy Act takes effect. It has a low bar for applicability, and unique or less common requirements like:
🔸 prohibiting processing of sensitive personal data unless it is strictly necessary to provide or maintain a consumer-requested product or service;
🔸 forbidding collection of personal data unless it is reasonably necessary and proportionate to provide or maintain a consumer-requested product or service;
🔸 banning sales of personal data of minors, and processing of their personal data for #TargetedAdvertising;
🔸 a broad data deletion right unless retention is required by law (though other provisions may give some flexibility);
🔸 privacy policy requirements, including to disclose the type of, business model of, or processing conducted by each third party to which personal data is disclosed; and
🔸 consumer health data requirements.

If you haven't already, identify which of these laws apply to your organization, and see if your current privacy practices address what's required. Consider especially:

✔️ How your organization identifies accounts, profiles, and personal data of minors, and treats them in line with Colorado's, Maryland's, and other states' increasingly complex requirements
💡 Validate that there are processes to address parental reports, app store provided age information, and other reports and signals that a data subject is a minor;

✔️ Data collection and use limits to address Maryland's strict data minimization requirements, particularly for sensitive personal data
💡 Updates may be appropriate in #privacy impact assessment processes, organizational policies, and organizational privacy training;

✔️ Confirming your organization's privacy policy has the third party details required under the Maryland law.
-
📊 How can we use data science to truly improve schools?

For over 50 years, education leaders have been urged to leverage data for decision-making. Yet despite massive investments in dashboards and analytics systems, research shows that the link between data use and actual improvements in student outcomes is often weak.

In my new paper, "Data Science in Education Administration, Policy, and Practice", I argue that education data science should be understood as a third core methodology in education research, alongside quantitative and qualitative traditions.

Open Access Preprint: https://lnkd.in/eKYTr3i3

Key insights:
🔹 Beyond dashboards: Data science is more than reporting — it involves machine learning, visualization, and exploratory data analysis to support evidence-based improvement cycles.
🔹 Prediction matters: School leaders need accurate predictions, not just statistical model fit. Accuracy should stand alongside theory in informing decisions.
🔹 Algorithms in education must be Accurate, Accessible, Actionable, and Accountable (the "4As").
🔹 Capacity building: We need to train educational data scientists who can both analyze data and communicate findings to policymakers, teachers, and communities. In effect, we must train people who can talk to people and talk to machines.

👉 The goal is not to replace theory, but to balance explanation with prediction — and to center human judgment, ethics, and collaboration in the process.

🔑 Key Takeaways for the Field

For Practice: Schools and districts should embed data science partnerships — not just dashboards — into leadership and improvement cycles. Joint sensemaking between analysts and leaders is essential.

For Research: We must expand beyond model fitting to systematically test prediction accuracy (a small illustration of this distinction follows below) and build open, reproducible workflows that connect theory and application.

For Training: Graduate programs in education leadership and policy need roadmaps for education data science capacity building — equipping future leaders to understand, question, and apply advanced analytics responsibly. A key practice to borrow from data science is the Common Task Framework, which focuses on: (a) open, large-scale, real-world, deidentified datasets; (b) a shared culture of shared code for shared research; and (c) public and open evaluation of algorithms.

I'd love to hear from colleagues! Let me know what you think!

Open Access Preprint: https://lnkd.in/eKYTr3i3

#EducationResearch #DataScience #EducationPolicy #SchoolLeadership #LearningAnalytics #EdTech
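To make the "prediction matters" point concrete, here is a minimal sketch (not from the paper) showing how in-sample model fit can look healthy while held-out predictive accuracy is poor. The data are synthetic and the setup (many noisy indicators, few students) is purely illustrative; it assumes NumPy and scikit-learn are available.

```python
# A minimal sketch: in-sample fit vs. held-out prediction accuracy on synthetic data.
# The variable names (students, indicators) are illustrative, not from the paper.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_students, n_indicators = 60, 40          # few students, many mostly-noisy indicators
X = rng.normal(size=(n_students, n_indicators))
y = 0.8 * X[:, 0] + rng.normal(scale=1.0, size=n_students)   # only one indicator matters

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LinearRegression().fit(X_train, y_train)

# "Model fit" on the data used to fit the model vs. accuracy for unseen students.
print("In-sample R^2:", round(r2_score(y_train, model.predict(X_train)), 2))
print("Held-out R^2: ", round(r2_score(y_test, model.predict(X_test)), 2))
print("Held-out MAE: ", round(mean_absolute_error(y_test, model.predict(X_test)), 2))
```

The gap between the two R² values is the practical argument for testing prediction accuracy on held-out data, not just reporting how well a model fits the data it was trained on.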
-
🇮🇪 The Data Protection Commission (DPC, Ireland) has published a "Data Protection Toolkit for Schools", a new resource dedicated to further assisting schools in meeting their data protection obligations when processing the personal data of children.

The toolkit covers the following:
1. A detailed guidance piece on different aspects of data protection law in the specific context of schools
2. An FAQ section containing answers to questions commonly received by the DPC from the education sector
3. An appendix containing three helpful resources for schools, namely:
- A sample template for Data Protection Impact Assessments (DPIAs)
- An infographic on what information to include in a Privacy Policy
- A "checklist" for schools on how to respond to a Subject Access Request (SAR)

#privacy #europe #ireland #gdpr #children #dataprotection #dpia