talk-data.com

Topic: React
Tags: javascript_library, front_end, web_development
Tagged activities: 99

Activity Trend: peak of 9 activities/quarter (2020-Q1 to 2026-Q1)

Activities
99 activities · Newest first

Learn D3.js - Second Edition

Master data visualization with D3.js v7 using modern web standards and real-world projects to build interactive charts, maps, and visual narratives.

Key Features
- Build dynamic, data-driven visualizations using D3.js v7 and ES2015+
- Create bar, scatter, and network charts, geographic maps, and more
- Learn through step-by-step tutorials backed by hundreds of downloadable examples
- Purchase of the print or Kindle book includes a free PDF eBook

Book Description
Learn D3.js, Second Edition, is a fully updated guide to building interactive, standards-compliant web visualizations using D3.js v7 and modern JavaScript. Whether you're a developer, designer, data journalist, or analyst, this book will help you master the core techniques for transforming data into compelling, meaningful visuals. Starting with fundamentals like selections, data binding, and SVG, the book progressively covers scales, axes, animations, hierarchical data, and geographical maps. Each chapter includes short examples and a full hands-on project with downloadable code you can run, modify, and use in your own work. This new edition introduces an improved chapter structure, updated code samples using ES2015 standards, and better formatting for readability. There is also a dedicated chapter on integrating D3 with modern frameworks like React and Vue, along with performance, accessibility, and deployment strategies. For those migrating from older versions of D3, a detailed appendix is included at the end. With thoughtful pedagogy and a practical approach, this book remains one of the most thorough and respected resources for learning D3.js and will help you truly leverage data visualization.

What you will learn
- Bind data to DOM elements and apply transitions and styles
- Build bar, line, pie, scatter, tree, and network charts
- Create animated, interactive behaviours with zoom, drag, and tooltips
- Visualize hierarchical data, flows, and maps using D3 layouts and projections
- Use D3 with HTML5 Canvas for high-performance rendering
- Develop accessible and responsive D3 apps for all screen sizes
- Integrate D3 with frameworks like React and Vue
- Migrate older D3 codebases to version 7

Who this book is for
This book is for web developers, data journalists, designers, analysts, and anyone who wants to create interactive, web-based data visualizations. A basic understanding of HTML, CSS, and JavaScript is recommended. No prior knowledge of SVG or D3 is required.

Generative AI for Full-Stack Development: AI Empowered Accelerated Coding

Gain cutting-edge skills in building a full-stack web application with AI assistance. This book will guide you in creating your own travel application using React and Node.js, with MongoDB as the database, while emphasizing the use of Gen AI platforms like Perplexity.ai and Claude for quicker development and more accurate debugging. The book's step-by-step approach will help you bridge the gap between traditional web development methods and modern AI-assisted techniques, making it both accessible and insightful. It provides valuable lessons on professional web application development practices.

By focusing on a practical example, the book offers hands-on experience that mirrors real-world scenarios, equipping you with relevant and in-demand skills that can be easily transferred to other projects. The book emphasizes the principles of responsive design, teaching you how to create web applications that adapt seamlessly to different screen sizes and devices. This includes using fluid grids, media queries, and optimizing layouts for usability across various platforms. You will also learn how to design, manage, and query databases using MongoDB, ensuring you can effectively handle data storage and retrieval in your applications.

Most significantly, the book will introduce you to generative AI tools and prompt engineering techniques that can accelerate coding and debugging processes. This modern approach will streamline development workflows and enhance productivity. By the end of this book, you will not only have learned how to create a complete web application from backend to frontend, along with database management, but you will also have gained invaluable associated skills such as using IDEs, version control, and deploying applications efficiently and effectively with AI.

What You Will Learn
- How to build a full-stack web application from scratch
- How to use generative AI tools to enhance coding efficiency and streamline the development process
- How to create user-friendly interfaces that enhance the overall experience of your web applications
- How to design, manage, and query databases using MongoDB

Who This Book Is For
Frontend developers, backend developers, and full-stack developers.

AWS re:Invent 2025 - Architecting scalable and secure agentic AI with Bedrock AgentCore (AIM431)

Go deep into how AgentCore works under the hood. This technical deep-dive session breaks down the ReAct loop—how agents iteratively reason, plan, and perform tool calls to accomplish complex goals. Learn how context management, memory, and data grounding shape each reasoning step and response. Explore how AgentCore operationalizes this loop with modular services: Runtime for scalable execution, Gateway for dynamic tool and data access, Policy for deterministic controls, Observability for monitoring agent behavior, and Evaluations for continuous quality improvements. Understand how AgentCore’s architecture enables reliable, secure, and production-ready deployment of autonomous, data-driven AI agents.
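The ReAct loop described above can be sketched roughly as follows. This is a minimal, framework-agnostic illustration in Python, not AgentCore's actual API: `call_llm`, the tool registry, and the message format are hypothetical stand-ins, with the model call scripted so the example runs end to end.

```python
# Minimal ReAct-style loop (an illustrative sketch, not the AgentCore API).
# `call_llm` is a scripted stand-in for a real model call.

def call_llm(messages):
    # A real implementation would send `messages` to an LLM and parse its reply.
    # This stub scripts one reason -> act -> observe cycle for illustration.
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "tool": "get_order_status", "args": {"order_id": "42"}}
    return {"type": "final", "answer": "Order 42 has shipped."}

TOOLS = {
    "get_order_status": lambda order_id: {"order_id": order_id, "status": "shipped"},
}

def react_agent(goal, max_steps=5):
    messages = [{"role": "user", "content": goal}]          # running context / transcript
    for _ in range(max_steps):
        step = call_llm(messages)                           # reason and plan the next action
        if step["type"] == "final":
            return step["answer"]                           # goal accomplished
        observation = TOOLS[step["tool"]](**step["args"])   # act: perform the tool call
        messages.append({"role": "assistant", "content": str(step)})
        messages.append({"role": "tool", "content": str(observation)})  # ground the next step
    return "Stopped after max_steps without a final answer."

print(react_agent("Where is order 42?"))
```

In a production system, the context management, memory, and guardrails mentioned in the abstract would sit around this loop rather than inside it.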


Real-Time Context Engineering for Agents

Agents need timely and relevant context data to work effectively in an interactive environment. If an agent takes more than a few seconds to react to an action in a client application, users will not perceive it as intelligent - just laggy.

Real-time context engineering involves building real-time data pipelines to pre-process application data and serve relevant and timely context to agents. This talk will focus on how you can leverage application identifiers (user ID, session ID, article ID, order ID, etc.) to identify which real-time context data to provide to agents. We will contrast this approach with the more traditional RAG approach of using vector indexes to retrieve chunks of relevant text using the user query. Our approach will necessitate the introduction of the Agent-to-Agent protocol, an emerging standard for defining APIs for agents.

We will also demonstrate how we provide real-time context data from applications inside Python agents using the Hopsworks feature store. We will walk through an example of an interactive application (TikTok clone).
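As a rough sketch of what serving real-time context by application identifier can look like, the snippet below uses the Hopsworks Python client to fetch a precomputed feature vector keyed on a user and session ID. The feature view name, version, and key names are hypothetical examples, and exact call details should be checked against the Hopsworks documentation.

```python
# Sketch: fetching real-time context for an agent keyed on application identifiers.
# The feature view name, version, and key names below are hypothetical examples.
import hopsworks

project = hopsworks.login()              # authenticate to the Hopsworks project
fs = project.get_feature_store()

# A feature view that joins precomputed user/session features for online serving.
fv = fs.get_feature_view(name="user_session_context", version=1)

def get_agent_context(user_id: str, session_id: str):
    # Online lookup keyed on application identifiers, not on a free-text query.
    return fv.get_feature_vector({"user_id": user_id, "session_id": session_id})

context = get_agent_context(user_id="u_123", session_id="s_456")
```

The returned feature values can then be formatted into the agent's prompt, in contrast to the vector-index retrieval used in the traditional RAG approach.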

Learn to build an autonomous data science agent from scratch using open-source models and modern AI tools. This hands-on workshop will guide you through implementing a ReAct-based agent that can perform end-to-end data analysis tasks, from data cleaning to model training, using natural language reasoning and Python code generation. We'll explore the CodeAct framework, where the agent "thinks" through problems and then generates executable Python code as actions. You'll discover how to safely execute AI-generated code using Together Code Interpreter, creating a modular and maintainable system that can handle complex analytical workflows. Perfect for data scientists, ML engineers, and developers interested in agentic AI, this workshop combines practical implementation with best practices for building reasoning-driven AI assistants. By the end, you'll have a working data science agent and understand the fundamentals of agent architecture design.

What you'll learn:
- ReAct framework implementation
- Safe code execution in AI systems
- Agent evaluation and optimization techniques
- Building transparent, "hackable" AI agents

No advanced AI background required, just familiarity with Python and data science concepts.
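A rough sketch of the CodeAct pattern described above: the model emits Python code as its action, and that code runs outside the agent process rather than via an in-process exec(). Here `generate_code` is a scripted stand-in for a model call, and a hosted sandbox such as Together Code Interpreter would replace the subprocess runner in practice; none of this is the workshop's actual code.

```python
# Sketch of a CodeAct-style step: the model emits Python code as its action,
# and the code runs in a separate interpreter. `generate_code` is a scripted
# stand-in for an LLM call; a hosted code interpreter would replace
# `run_in_subprocess` in a real system.
import subprocess
import sys

def generate_code(task, history):
    # A real implementation would prompt an LLM with the task and prior outputs.
    if not history:
        return "import statistics\nprint(statistics.mean([1, 2, 3, 4]))"
    return "DONE"   # the model signals it has finished

def run_in_subprocess(code, timeout=10):
    # Run generated code in a separate interpreter process, never exec() in-process.
    result = subprocess.run([sys.executable, "-c", code],
                            capture_output=True, text=True, timeout=timeout)
    return result.stdout + result.stderr

def codeact_loop(task, max_steps=8):
    history = []
    for _ in range(max_steps):
        code = generate_code(task, history)   # "thought" rendered as executable code
        if code.strip() == "DONE":
            break
        history.append((code, run_in_subprocess(code)))
    return history

print(codeact_loop("What is the mean of 1, 2, 3, 4?"))
```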

Discover how we transformed enterprise data interaction for over 50,000 active users through Snowflake's Cortex Analyst API with SiemensGPT. Our plugin architecture, powered by the ReACT agent model, converts natural language into SQL queries and dynamic visualizations, orchestrating everything through a unified interface. Beyond productivity gains, this solution democratizes data access across Siemens, enabling employees at all levels to derive business insights through simple conversations.

ActiveTigger: A Collaborative Text Annotation Research Tool for Computational Social Sciences

The exponential growth of textual data—ranging from social media posts and digital news archives to speech-to-text transcripts—has opened new frontiers for research in the social sciences. Tasks such as stance detection, topic classification, and information extraction have become increasingly common. At the same time, the rapid evolution of Natural Language Processing, especially pretrained language models and generative AI, has largely been led by the computer science community, often leaving a gap in accessibility for social scientists.

To address this, we began developing ActiveTigger in 2023: a lightweight, open-source Python application (with a web frontend in React) designed to accelerate the annotation process and manage large-scale datasets through the integration of fine-tuned models. It aims to support computational social science for a broad audience both within and outside the social sciences. The tool is already used by an active community of social scientists, and the stable version is planned for early June 2025.

From a more technical perspective, the API is designed to manage the complete workflow: project creation, embeddings computation, exploration of the text corpus, human annotation with active learning, fine-tuning of pre-trained (BERT-like) models, prediction on a larger corpus, and export. It also integrates LLM-as-a-service capabilities for prompt-based annotation and information extraction, offering a flexible approach to hybrid manual/automatic labeling. Accessible through both a web frontend and a Python client, ActiveTigger encourages customization and adaptation to specific research contexts and practices.
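To illustrate the "human annotation with active learning" step in such a workflow, here is a generic uncertainty-sampling sketch. It is not ActiveTigger's actual API: the `annotate` callable stands in for the human annotation step in the web frontend, and a simple scikit-learn classifier stands in for the fine-tuned model.

```python
# Generic uncertainty-sampling active learning loop (illustrative only;
# not ActiveTigger's API). `annotate` stands in for the human annotation
# step that would happen in the web frontend.
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_loop(embeddings, labels, annotate, n_rounds=10, batch_size=20):
    """embeddings: (n_docs, dim) array; labels: dict {doc_index: class},
    seeded with examples covering at least two classes."""
    model = None
    for _ in range(n_rounds):
        idx = sorted(labels)
        model = LogisticRegression(max_iter=1000).fit(
            embeddings[idx], [labels[i] for i in idx])
        probs = model.predict_proba(embeddings)
        uncertainty = 1.0 - probs.max(axis=1)          # least-confident sampling
        unlabeled = [i for i in range(len(embeddings)) if i not in labels]
        if not unlabeled:
            break
        # Send the most uncertain documents to the annotator next.
        for i in sorted(unlabeled, key=lambda i: uncertainty[i], reverse=True)[:batch_size]:
            labels[i] = annotate(i)
        # In an ActiveTigger-style workflow, this refit step could instead
        # fine-tune a BERT-like model on the growing labeled set.
    return model
```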

In this talk, we will delve into the motivations behind the creation of ActiveTigger, outline its technical architecture, and walk through its core functionalities. Drawing on several ongoing research projects within the Computational Social Science (CSS) group at CREST, we will illustrate concrete use cases where ActiveTigger has accelerated data annotation, enabled scalable workflows, and fostered collaborations. Beyond the technical demonstration, the talk will also open a broader reflection on the challenges and opportunities brought by generative AI in academic research—especially in terms of reliability, transparency, and methodological adaptation for qualitative and quantitative inquiries.

The repository of the project: https://github.com/emilienschultz/activetigger/

The development of this software is funded by the DRARI Ile-de-France and supported by Progédo.

Real-Time Context Engineering for LLMs

Context engineering has replaced prompt engineering as the main challenge in building agents and LLM applications. Context engineering involves providing LLMs with relevant and timely context data from various data sources, which allows them to make context-aware decisions. The context data provided to the LLM must be produced in real time so that it can react intelligently at human-perceivable latencies (a second or two at most). If the application takes longer to react, users will perceive it as laggy and unintelligent. In this talk, we will introduce context engineering and make the case for real-time context engineering in interactive applications. We will also demonstrate how to integrate real-time context data from applications inside Python agents using the Hopsworks feature store and corresponding application IDs. Application IDs are the key to unlocking application context data for agents and LLMs. We will walk through an example of an interactive application (a TikTok clone) that we make AI-enabled with Hopsworks.

In this talk I’ll share what I’ve learned as the sole maintainer of the package "react-currency-input-field". What started as a small weekend side project now has over 1M monthly downloads on npm. I'll go through some of the lessons I've learned as it has grown: triaging issues, evaluating feature requests, and answering the important questions: How much money do I actually make from it? And does it help in job interviews?

Building an AI Agent for Natural Language to SQL Query Execution on Live Databases

This hands-on tutorial will guide participants through building an end-to-end AI agent that translates natural language questions into SQL queries, validates and executes them on live databases, and returns accurate responses. Participants will build a system that intelligently routes between a specialized SQL agent and a ReAct chat agent, implementing RAG for query similarity matching, comprehensive safety validation, and human-in-the-loop confirmation. By the end of this session, attendees will have created a powerful and extensible system they can adapt to their own data sources.
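As a small illustration of the kind of safety validation such an agent needs before executing generated SQL against a live database, here is a hedged sketch; the allow-list, checks, and confirmation prompt are illustrative, not the tutorial's actual implementation.

```python
# Illustrative SQL safety gate for an NL-to-SQL agent (not the tutorial's
# actual code): allow only single read-only statements, block DML/DDL
# keywords, and require human confirmation before anything executes.
import re

FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|truncate|grant|create)\b",
    re.IGNORECASE,
)

def validate_sql(query: str) -> None:
    stripped = query.strip().rstrip(";")
    if ";" in stripped:
        raise ValueError("Multiple statements are not allowed.")
    if not stripped.lower().startswith(("select", "with")):
        raise ValueError("Only read-only SELECT queries are allowed.")
    if FORBIDDEN.search(stripped):
        raise ValueError("Query contains a forbidden keyword.")

def confirm_and_run(query: str, cursor) -> list:
    validate_sql(query)
    # Human-in-the-loop confirmation before touching the live database.
    if input(f"Run this query?\n{query}\n[y/N] ").lower() != "y":
        raise RuntimeError("Query rejected by the user.")
    cursor.execute(query)
    return cursor.fetchall()
```

The keyword filter is deliberately conservative (it can reject harmless queries), which is usually the right trade-off when an LLM is generating the SQL.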


Airflow 2 had a plugin mechanism to extend the UI with new functionality as well as to add hooks and other features. Because Airflow 3 rewrote the UI, old plugins no longer worked in all cases. Airflow 3.1 now provides a revamped way to extend the UI with a new plugin schema based on native React components and embedded iframes, following the AIP-68 definitions. In this session we will give an overview of the capabilities and an introduction to rolling your own.
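A rough sketch of what an AIP-68-style UI plugin declaration might look like is shown below. The `AirflowPlugin` base class and `name` attribute are the established plugin interface, but the `external_views` attribute and its field names are assumptions based on AIP-68; check the Airflow 3.1 plugin documentation for the exact schema.

```python
# Sketch of an Airflow 3.1-style UI plugin. The `external_views` attribute
# and its dict fields are assumptions loosely based on AIP-68; consult the
# Airflow 3.1 docs for the exact schema.
from airflow.plugins_manager import AirflowPlugin

class MyUiPlugin(AirflowPlugin):
    name = "my_ui_plugin"

    # Hypothetical: embed an external dashboard in the new React UI via an iframe.
    external_views = [
        {
            "name": "Team dashboard",
            "href": "https://dashboards.example.com/airflow",  # hypothetical URL
            "destination": "nav",                               # assumed placement key
        }
    ]
```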

Airflow 3.0 is the most significant release in the project's history, and brings a better user experience, stronger security, and the ability to run tasks anywhere, at any time. In this workshop, you'll get hands-on experience with the new release and learn how to leverage new features like DAG versioning, backfills, data assets, and a new React-based UI. Whether you're writing traditional ELT/ETL pipelines or complex ML and GenAI workflows, you'll learn how Airflow 3 will make your day-to-day work smoother and your pipelines even more flexible. This workshop is suitable for intermediate to advanced Airflow users. Beginners should consider taking the Airflow fundamentals course on the Astronomer Academy before attending this workshop.