📊 S&P 500: 5,804.16 pts (+33.44 | +0.58%) 💻 NASDAQ: 18,740.82 pts (+133.17 | +0.72%) 🏭 DOW JONES: 41,612.99 pts (+217.93 | +0.53%) 🇪🇸 IBEX 35: 14,081.30 pts (-205.60 | -1.44%) ₿ BITCOIN: $107,482.38 (-307.56 | -0.29%) Ξ ETHEREUM: $2,490.93 (-39.20 | -1.55%) 📉 Trend: bearish (S&P 500 -2.70% 5d)

Technology

7 films that forever changed their genres

Hipertextual Sat, 24 May 2025 16:00:00 +0000

Until 1960, American moviegoers could walk into a screening after the projection had already begun. This caused discomfort and stumbling in the dark, and made missing the first minutes of the story a routine habit. So when the promotional poster for Alfred Hitchcock's Psycho demanded that audiences arrive […]


The obstacles 'The Last of Us' must overcome from here on out

Hipertextual Sat, 24 May 2025 13:00:00 +0000

The Last of Us is going through a difficult stretch. After a first season that won over fans and critics alike, the production's second installment ran into several problems. The most obvious: the adaptation of the Naughty Dog classic had to move through the tricky terrain of substantially changing the source material. […]


The story of the mysterious mangaka Ryo Tatsuki and her "prophetic" work that foretells a great disaster for 2025

Hipertextual Sat, 24 May 2025 10:00:00 +0000

In recent weeks, the name of the mangaka Ryo Tatsuki has become a strange Internet phenomenon, all because her manga The Future I Saw, self-published in 1999, contains a series of predictions that appear to have come true over the past decade. More alarming still: the work contains another terrifying forecast, which […]


3 reasons to watch 'Fear Street: Prom Queen', the new Netflix hit

Hipertextual Sat, 24 May 2025 08:00:00 +0000

Fear Street: Prom Queen (2025) is an engaging, savage, and bloody homage to the slasher, and of that there is no doubt from its very first scene. But this adaptation of R.L. Stine's 1992 novel of the same name also explores what has made the writer's work so beloved. Namely: […]


OpenAI updates Operator, the powerful AI agent that browses the web for you

Hipertextual Fri, 23 May 2025 23:22:25 +0000

OpenAI has updated its AI agent that can browse the web for you. The company announced that Operator will use o3, the powerful reasoning model designed to carry out exhaustive research on the internet. Previously, the agent was based on a customized version of GPT-4o. "We are replacing the existing model based on […]


Programming

What Zen And The Art Of Motorcycle Maintenance Can Teach Us About Web Design

Smashing Magazine Fri, 23 May 2025 11:00:00 GMT
Road-tripping along the line between engineering and spirituality, Robert M. Pirsig’s musings on the arts, sciences, and Quality ring as true now as they ever have.

Smashing Animations Part 3: SMIL’s Not Dead Baby, SMIL’s Not Dead

Smashing Magazine Wed, 21 May 2025 08:00:00 GMT
While there are plenty of ways that CSS animations can bring designs to life, adding simple SMIL (Synchronized Multimedia Integration Language) animations in SVG can help them do much more. Andy Clarke explains where SMIL animations in SVG take over where CSS leaves off.

Design System In 90 Days

Smashing Magazine Mon, 19 May 2025 10:00:00 GMT
Helpful PDF worksheets and tools to get the design system effort up and running — and adopted! Kindly powered by How To Measure UX and Design Impact, a friendly course on how to show the impact of your incredible UX work on business.

Building A Practical UX Strategy Framework

Smashing Magazine Fri, 16 May 2025 11:00:00 GMT
Learn how to create and implement a UX strategy framework that shapes work and drives real business value.

Fewer Ideas: An Unconventional Approach To Creativity

Smashing Magazine Thu, 15 May 2025 10:00:00 GMT
Remember that last team brainstorming session where you were supposed to generate a long list of brilliant ideas? How many of those ideas actually stuck? Did leadership act on any of those ideas? In this article, Eric Olive challenges the value of exercises like brainstorming and explores more effective methods for sparking creativity to improve design and enhance the user’s experience.

Artificial Intelligence

What's new in TensorFlow 2.19

TensorFlow Blog 2025-03-13T09:00:00.000-07:00
Posted by the TensorFlow team

TensorFlow 2.19 has been released! Highlights of this release include changes to the C++ API in LiteRT, bfloat16 support for TFLite casting, and the end of libtensorflow package releases. Learn more by reading the full release notes.

Note: Release updates on the new multi-backend Keras will be published on keras.io, starting with Keras 3.0. For more information, please see https://keras.io/keras_3/.

TensorFlow Core

LiteRT

The public constants tflite::Interpreter::kTensorsReservedCapacity and tflite::Interpreter::kTensorsCapacityHeadroom are now const references, rather than constexpr compile-time constants. (This enables better API compatibility for TFLite in Play services while preserving the implementation flexibility to change the values of these constants in the future.)

TF-Lite

The tfl.Cast op now supports bfloat16 in the runtime kernel. tf.lite.Interpreter emits a deprecation warning redirecting to its new location at ai_edge_litert.interpreter, as the tf.lite.Interpreter API will be deleted in TF 2.20. See the migration guide for details.

Libtensorflow

We have stopped publishing standalone libtensorflow packages, but the library can still be unpacked from the PyPI package.


Introducing Wake Vision: A High-Quality, Large-Scale Dataset for TinyML Computer Vision Applications

TensorFlow Blog 2024-12-05T09:00:00.000-08:00
Posted by Colby Banbury, Emil Njor, Andrea Mattia Garavagno, Vijay Janapa Reddi – Harvard University

TinyML is an exciting frontier in machine learning, enabling models to run on extremely low-power devices such as microcontrollers and edge devices. However, the growth of this field has been stifled by a lack of tailored large and high-quality datasets. That's where Wake Vision comes in—a new dataset designed to accelerate research and development in TinyML.


Why TinyML Needs Better Data

The development of TinyML requires compact and efficient models, often only a few hundred kilobytes in size. The applications targeted by standard machine learning datasets, like ImageNet, are not well-suited for these highly constrained models.

Existing datasets for TinyML, like Visual Wake Words (VWW), have laid the groundwork for progress in the field. However, their smaller size and inherent limitations pose challenges for training production-grade models. Wake Vision builds upon this foundation by providing a large, diverse, and high-quality dataset specifically tailored for person detection—the cornerstone vision task for TinyML.

What Makes Wake Vision Different?

A table displaying the number of images used for training, validation, and testing different datasets, including Wake Vision, Visual Wake Words, CIFAR-100, and PASCAL VOC 2012. The table shows the total number of images and the number of person images in each dataset split.

Wake Vision is a new, large-scale dataset with roughly 6 million images, almost 100 times larger than VWW, the previous state-of-the-art dataset for person detection in TinyML. The dataset provides two distinct training sets:

  • Wake Vision (Large): Prioritizes dataset size.
  • Wake Vision (Quality): Prioritizes label quality.

Wake Vision's comprehensive filtering and labeling process significantly enhances the dataset's quality.

Why Data Quality Matters for TinyML Models

In traditional overparameterized models, it is widely believed that data quantity matters more than data quality, as an overparameterized model can adapt to errors in the training data. But according to the image below, TinyML tells a different story:

Five line graphs illustrate the Wake Vision Test Score with varying percentages of training data quality used, comparing models by parameter count (78K, 309K, 1.2M, 4.9M, and 11M) and  error rate (7%, 15%, and 30%).

The figure above shows that high-quality labels (less error) are more beneficial for under-parameterized models than simply having more data. Larger, error-prone datasets can still be valuable when paired with fine-grained techniques.

By providing two versions of the training set, Wake Vision enables researchers to explore the balance between dataset size and quality effectively.

Real-World Testing: Wake Vision's Fine-Grained Benchmarks

Five images are shown, each with text underneath describing the content as Perceived Older Person, Near Person, Bright Image, Perceived Female Person, and Depicted Person.

Unlike many open-source datasets, Wake Vision offers fine-grained benchmarks and detailed tests for real-world applications like those shown in the above figure. These enable the evaluation of model performance in real-world scenarios, such as:

  • Distance: How well the model detects people at various distances from the camera.
  • Lighting Conditions: Performance in well-lit vs. poorly-lit environments.
  • Depictions: Handling of varied representations of people (e.g., drawings, sculptures).
  • Perceived Gender and Age: Detecting biases across genders and age groups.

These benchmarks give researchers a nuanced understanding of model performance in specific, real-world contexts and help identify potential biases and limitations early in the design phase.

Key Performance Gains With Wake Vision

The performance gains achieved using Wake Vision are impressive:

  • Up to a 6.6% increase in accuracy over the established VWW dataset.
  • Error rate reduction from 7.8% to 2.2% with manual label validation on evaluation sets.
  • Robustness across various real-world conditions, from lighting to perceived age and gender.

Furthermore, combining the two Wake Vision training sets, using the larger set for pre-training and the quality set for fine-tuning, yields the best results, highlighting the value of both datasets when used in sophisticated training pipelines.

Wake Vision Leaderboard: Track and Submit New Top-Performing Models

The Wake Vision website features a Leaderboard, providing a dedicated platform to assess and compare the performance of models trained on the Wake Vision dataset.

The leaderboard enables a clear and detailed view of how models perform under various conditions, with performance metrics like accuracy, error rates, and robustness across diverse real-world scenarios. It’s an excellent resource for both seasoned researchers and newcomers looking to improve and validate their approaches.

Explore the leaderboard to see the current rankings, learn from high-performing models, and submit your own to contribute to advancing the state of the art in TinyML person detection.

Making Wake Vision Easy to Access

Wake Vision is available through popular dataset services.

With its permissive license (CC-BY 4.0), researchers and practitioners can freely use and adapt Wake Vision for their TinyML projects.

Get Started with Wake Vision Today!

The Wake Vision team has made the dataset, code, and benchmarks publicly available to accelerate TinyML research and enable the development of better, more reliable person detection models for ultra-low-power devices.

To learn more and access the dataset, visit Wake Vision’s website, where you can also check out a leaderboard of top-performing models on the Wake Vision dataset - and see if you can create better performing models!


MLSysBook.AI: Principles and Practices of Machine Learning Systems Engineering

TensorFlow Blog 2024-11-19T09:00:00.000-08:00
Posted by Jason Jabbour, Kai Kleinbard and Vijay Janapa Reddi (Harvard University)

Everyone wants to do the modeling work, but no one wants to do the engineering.

If ML developers are like astronauts exploring new frontiers, ML systems engineers are the rocket scientists designing and building the engines that take them there.

Introduction

"Everyone wants to do modeling, but no one wants to do the engineering," highlights a stark reality in the machine learning (ML) world: the allure of building sophisticated models often overshadows the critical task of engineering them into robust, scalable, and efficient systems.

The reality is that ML and systems are inextricably linked. Models, no matter how innovative, are computationally demanding and require substantial resources—with the rise of generative AI and increasingly complex models, understanding how ML infrastructure scales becomes even more critical. Ignoring the system's limitations during model development is a recipe for disaster.

Unfortunately, educational resources on the systems side of machine learning are lacking. There are plenty of textbooks and materials on deep learning theory and concepts. However, we truly need more resources on the infrastructure and systems side of machine learning. Critical questions—such as how to optimize models for specific hardware, deploy them at scale, and ensure system efficiency and reliability—are still not adequately understood by ML practitioners. This lack of understanding is not due to disinterest but rather a gap in available knowledge.

One significant resource addressing this gap is MLSysBook.ai. This blog post explores key ML systems engineering concepts from MLSysBook.ai and maps them to the TensorFlow ecosystem to provide practical insights for building efficient ML systems.

The Connection Between Machine Learning and Systems

Many think machine learning is solely about extracting patterns and insights from data. While this is fundamental, it’s only part of the story. Training and deploying these "deep" neural network models often necessitates vast computational resources, from powerful GPUs and TPUs to massive datasets and distributed computing clusters.

Consider the recent wave of large language models (LLMs) that have pushed the boundaries of natural language processing. These models highlight the immense computational challenges in training and deploying large-scale machine learning models. Without carefully considering the underlying system, training times can stretch from days to weeks, inference can become sluggish, and deployment costs can skyrocket.

Building a successful machine-learning solution involves the entire system, not just the model. This is where ML systems engineering takes the reins, allowing you to optimize model architecture, hardware selection, and deployment strategies, ensuring that your models are not only powerful in theory but also efficient and scalable.

To draw an analogy, if developing algorithms is like being an astronaut exploring the vast unknown of space, then ML systems engineering is similar to the work of rocket scientists building the engines that make those journeys possible. Without the precise engineering of rocket scientists, even the most adventurous astronauts would remain earthbound.


Bridging the Gap: MLSysBook.ai and System-Level Thinking

One important new resource offering insights into ML systems engineering is an open-source "textbook", MLSysBook.ai, developed initially as part of Harvard University's CS249r Tiny Machine Learning course and HarvardX's TinyML online series. This project, which has expanded into an open, collaborative initiative, dives deep into the end-to-end ML lifecycle.

It highlights that the principles governing ML systems, whether designed for tiny embedded devices or large data centers, are fundamentally similar. For instance, while tiny machines might employ INT8 for numeric operations to save resources, larger systems often utilize FP16 for higher precision—the fundamental concepts, such as quantization, span across both scenarios.

Key concepts covered in this resource include:

    1. Data Engineering: Setting the foundation by efficiently collecting, preprocessing, and managing data to prepare it for the machine learning pipeline.
    2. Model Development: Crafting and refining machine learning models to meet specific tasks and performance goals.
    3. Optimization: Fine-tuning model performance and efficiency, ensuring effective use of hardware and resources within the system.
    4. Deployment: Transitioning models from development to real-world production environments while scaling and adapting them to existing infrastructure.
    5. Monitoring and Maintenance: Continuously tracking system health and performance to maintain reliability, address issues, and adapt to evolving data and requirements.

In an efficient ML system, data engineering lays the groundwork by preparing and organizing raw data, which is essential for any machine learning process. This ensures data can be transformed into actionable insights during model development, where machine learning models are created and refined for specific tasks. Following development, optimization becomes critical for enhancing model performance and efficiency, ensuring that models are tuned to run effectively on the designated hardware and within the system's constraints.

The seamless integration of these steps then extends into the deployment phase, where models are brought into real-world production environments. Here, they must be scaled and adapted to function effectively within existing infrastructure, highlighting the importance of robust ML systems engineering. However, the lifecycle of an ML system continues after deployment; continuous monitoring and maintenance are vital. This ongoing process ensures that ML systems remain healthy, reliable and perform optimally over time, adapting to new data and requirements as they arise.

A mapping of MLSysBook.AI's core ML systems engineering concepts to the TensorFlow ecosystem, illustrating how specific TensorFlow tools support each stage of the machine learning lifecycle, ultimately contributing to the creation of efficient ML systems.

SocratiQ: An Interactive AI-Powered Generative Learning Assistant

One of the exciting innovations we’ve integrated into MLSysBook.ai is SocratiQ—an AI-powered learning assistant designed to foster a deeper and more engaging connection with content focused on machine learning systems. By leveraging a Large Language Model (LLM), SocratiQ turns learning into a dynamic, interactive experience that allows students and practitioners to engage with and co-create their educational journey actively.

With SocratiQ, readers transition from passive content consumption to an active, personalized learning experience. Here’s how SocratiQ makes this possible:

  • Interactive Quizzes: SocratiQ enhances the learning process by automatically generating quizzes based on the reading content. This feature encourages active reflection and reinforces understanding without disrupting the learning flow. Learners can test their comprehension of complex ML systems concepts.
  • Adaptive, In-Content Learning: SocratiQ offers real-time conversations with the LLM without pulling learners away from the content they're engaging with. Acting as a personalized Teaching Assistant (TA), it provides tailored explanations.
  • Progress Assessment and Gamification: Learners’ progress is tracked and stored locally in their browser, providing a personalized path to developing skills without privacy concerns. This allows for evolving engagement as the learner progresses through the material.

SocratiQ strives to be a supportive guide that respects the primacy of the content itself. It subtly integrates into the learning flow, stepping in when needed to provide guidance, quizzes, or explanations—then stepping back to let the reader continue undistracted. This design ensures that SocratiQ works harmoniously within the natural reading experience, offering support and personalization while keeping the learner immersed in the content.

We plan to integrate capabilities such as research lookups and case studies. The aim is to create a unique learning environment where readers can study and actively engage with the material. This blend of content and AI-driven assistance transforms MLSysBook.ai into a living educational resource that grows alongside the learner's understanding.

Mapping MLSysBook.ai's Concepts to the TensorFlow Ecosystem

MLSysBook.AI focuses on the core concepts in ML system engineering while providing strategic tie-ins to the TensorFlow ecosystem. The TensorFlow ecosystem offers a rich environment for realizing many of the principles discussed in MLSysBook.AI. This makes the TensorFlow ecosystem a perfect match for the key ML systems concepts covered in MLSysBook.AI, with each tool supporting a specific stage of the machine learning process:

  • TensorFlow Data (Data Engineering): Supports efficient data preprocessing and input pipelines.
  • TensorFlow Core (Model Development): Central to model creation and training.
  • TensorFlow Lite (Optimization): Enables model optimization for various deployment scenarios, especially critical for edge devices.
  • TensorFlow Serving (Deployment): Facilitates smooth model deployment in production environments.
  • TensorFlow Extended (Monitoring and maintenance): Offers comprehensive tools for ongoing system health and performance.

Note that MLSysBook.AI does not explicitly teach or focus on TensorFlow-specific concepts or implementations. The book's primary goal is to explore fundamental ML system engineering principles. The connections drawn in this blog post to the TensorFlow ecosystem are simply intended to illustrate how these core concepts align with tools and practices used by industry practitioners, providing a bridge between theoretical understanding and real-world application.

Support ML Systems Education: Every Star Counts 🌟

If you find this blog post valuable and want to improve ML systems engineering education, please consider giving the MLSysBook.ai GitHub repository a star ⭐.

Thanks to our sponsors, each ⭐ added to the MLSysBook.ai GitHub repository translates to donations supporting students and minorities globally by funding their research scholarships, empowering them to drive innovation in machine learning systems research worldwide.

Every star counts—help us reach the generous funding cap!

Conclusion

The gap between ML modeling and system engineering is closing, and understanding both aspects is important for creating impactful AI solutions. By embracing ML system engineering principles and leveraging powerful tools like those in the TensorFlow ecosystem, we can go beyond building models to creating complete, optimized, and scalable ML systems.

As AI continues to evolve, the demand for professionals who can bridge the gap between ML algorithms and systems implementation will only grow. Whether you're a seasoned practitioner or just starting your ML journey, investing time in understanding ML systems engineering will undoubtedly pay dividends in your career and the impact of your work. If you’d like to learn more, listen to our MLSysBook.AI podcast, generated by Google’s NotebookLM.

Remember, even the most brilliant astronauts need skilled engineers to build their rockets!

Acknowledgments

We thank Josh Gordon for his suggestion to write this blog post and for encouraging and sharing ideas on how the book could be a useful resource for the TensorFlow community.


What's new in TensorFlow 2.18

TensorFlow Blog 2024-10-28T12:00:00.000-07:00
Posted by the TensorFlow team

TensorFlow 2.18 has been released! Highlights of this release (and 2.17) include NumPy 2.0 support, the LiteRT repository, a CUDA update, Hermetic CUDA, and more. Please see the full release notes for details.

Note: Release updates on the new multi-backend Keras will be published on keras.io, starting with Keras 3.0. For more information, please see https://keras.io/keras_3/.

TensorFlow Core

NumPy 2.0

TensorFlow 2.18 includes support for NumPy 2.0. While the majority of TensorFlow APIs will function seamlessly with NumPy 2.0, some edge cases of usage may break, e.g., out-of-boundary conversion errors and NumPy scalar representation errors.

Note that NumPy's type promotion rules have been changed (See NEP 50 for details). This may change the precision at which computations happen, leading either to type errors or to numerical changes to results. Please see the NumPy 2 migration guide.
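The effect of NEP 50 can be seen with a small example. This is plain NumPy behavior, not TensorFlow-specific: under the old value-based rules, a Python float combined with a `float32` scalar was promoted to `float64`; under NEP 50 the Python float is treated as a "weak" scalar and the `float32` dtype wins.

```python
import numpy as np

x = np.float32(3.0)
result = x + 3.0  # a Python float is a "weak" scalar under NEP 50

# NumPy >= 2.0 (NEP 50): result.dtype is float32 — the Python float
# adapts to the dtype of the NumPy operand.
# NumPy 1.x (value-based promotion): the same expression yields float64.
print(np.__version__, result.dtype)
```

This is exactly the kind of silent precision change the migration guide warns about: code that relied on the implicit upcast to `float64` may now compute at lower precision.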

We've updated some TensorFlow tensor APIs to maintain compatibility with NumPy 2.0 while preserving the out-of-boundary conversion behavior in NumPy 1.x.


LiteRT Repository

We're making some changes to how LiteRT (formerly known as TFLite) is developed. Over the coming months, we'll be gradually transitioning TFLite's codebase to LiteRT. Once the migration is complete, we'll start accepting contributions directly through the LiteRT repository. There will no longer be any binary TFLite releases and developers should switch to LiteRT for the latest updates.


Hermetic CUDA

If you build TensorFlow from source, Bazel will now download specific versions of CUDA, CUDNN and NCCL distributions, and then use those tools as dependencies in various Bazel targets. This enables more reproducible builds for Google ML projects and supported CUDA versions because the build no longer relies on the locally installed versions. More details are provided here.
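For source builds, the hermetic toolchain versions are selected via repository environment variables rather than by probing the local installation. A sketch of a `.bazelrc` fragment follows; the flag names match the TensorFlow hermetic CUDA documentation, but the exact names and supported versions may vary by TensorFlow release, so treat this as illustrative:

```
# Illustrative .bazelrc fragment: pin the CUDA/cuDNN versions that Bazel
# should download for a hermetic build (versions here are examples).
build --repo_env=HERMETIC_CUDA_VERSION="12.3.1"
build --repo_env=HERMETIC_CUDNN_VERSION="8.9.7.29"
```

Because the versions are pinned in the build configuration, two machines with different locally installed CUDA toolkits will still produce the same build.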


CUDA Update

TensorFlow binary distributions now ship with dedicated CUDA kernels for GPUs with a compute capability of 8.9. This improves performance on popular Ada-generation GPUs such as the NVIDIA RTX 40 series, L4, and L40.

To keep Python wheel sizes in check, we made the decision to no longer ship CUDA kernels for compute capability 5.0. That means the oldest NVIDIA GPU generation supported by the precompiled Python packages is now the Pascal generation (compute capability 6.0). For Maxwell support, we either recommend sticking with TensorFlow version 2.16, or compiling TensorFlow from source. The latter will be possible as long as the used CUDA version still supports Maxwell GPUs.


What's new in TensorFlow 2.17

TensorFlow Blog 2024-07-18T09:00:00.000-07:00
Posted by the TensorFlow team

TensorFlow 2.17 has been released! Highlights of this release (and 2.16) include a CUDA update, upcoming NumPy 2.0 support, and more. Please see the full release notes for details.

Note: Release updates on the new multi-backend Keras will be published on keras.io, starting with Keras 3.0. For more information, please see https://keras.io/keras_3/.

TensorFlow Core

CUDA Update

TensorFlow binary distributions now ship with dedicated CUDA kernels for GPUs with a compute capability of 8.9. This improves performance on popular Ada-generation GPUs such as the NVIDIA RTX 40 series, L4, and L40.

To keep Python wheel sizes in check, we made the decision to no longer ship CUDA kernels for compute capability 5.0. That means the oldest NVIDIA GPU generation supported by the precompiled Python packages is now the Pascal generation (compute capability 6.0). For Maxwell support, we either recommend sticking with TensorFlow version 2.16, or compiling TensorFlow from source. The latter will be possible as long as the used CUDA version still supports Maxwell GPUs.

Numpy 2.0

The upcoming TensorFlow 2.18 release will include support for NumPy 2.0. This may break some edge cases of TensorFlow API usage.

Drop TensorRT support

Starting with TensorFlow 2.18, support for TensorRT will be dropped. TensorFlow 2.17 will be the last version to include it.


Food Recipes

Cookie-press cookies and biscuits: the perfect treat for the little ones

Recetas de Rechupete Sat, 24 May 2025 07:22:56 +0000
Making cookies at home can be a real pastime. The work of kneading and shaping is a fun ritual that lets us spend entertaining time with the kids. The cookies I bring you today are going to become the little ones' favorites: you can make a whole range of different shapes with a single press of the trigger […]

Hake in green sauce, with the traditional Basque recipe

Recetas de Rechupete Sat, 24 May 2025 07:22:44 +0000
If I had to choose just one of the fish and seafood recipes of Basque cuisine, I couldn't pick between a good merluza a la vasca, a hake stew, merluza a la Koskera, or the one I present today, hake in green sauce. Although other recipes such as merluza a la marinera, the hake […]

How to make homemade churros

Recetas de Rechupete Sat, 24 May 2025 07:22:31 +0000
Do you want to learn how to make homemade churros? There isn't a corner of Spain where the traditional churros or porras aren't made. From the north to the south of the peninsula, churros are an always-welcome treat for breakfast or an afternoon snack, accompanied by a good hot chocolate or simply a café con leche. This sweet, […]

Yemas de Caravaca with various coatings

Recetas de Rechupete Fri, 23 May 2025 06:36:33 +0000
Yemas de Caravaca con varias coberturas
Imagen 1
Las yemas de Caravaca son un exquisito y delicado bocado que, una vez pruebas, es imposible olvidar. Típicas de la localidad murciana de Caravaca de la Cruz, se encuentran en los escaparates de todas sus pastelerías. Desde las básicas, rebozadas en azúcar glas, hasta las más golosas que se recubren con caramelo, con chocolate o […]
Leer más

Murcian michirones. A traditional dried fava bean stew

Recetas de Rechupete Fri, 23 May 2025 06:36:20 +0000
Murcian michirones. A traditional dried fava bean stew
Image 1
If there is one dish that smells of Murcia in every spoonful, it is without a doubt michirones murcianos. This hearty stew, based on dried fava beans, is a classic of Murcian cuisine and is enjoyed especially in the colder months, although some bars and restaurants serve it all year round […]
Read more

News

Judge rules Trump administration must repatriate wrongly deported Guatemalan asylum seeker - CNN en Español

Google News Sat, 24 May 2025 09:27:00 GMT
Judge rules Trump administration must repatriate wrongly deported Guatemalan asylum seeker - CNN en Español Read more

Trump resumes his trade war and threatens Europe and Apple - The New York Times

Google News Fri, 23 May 2025 20:43:34 GMT
Trump resumes his trade war and threatens Europe and Apple - The New York Times Read more

Russia launched a combined drone and missile attack on Kyiv: at least 15 injured - Infobae

Google News Sat, 24 May 2025 00:13:52 GMT
Russia launched a combined drone and missile attack on Kyiv: at least 15 injured - Infobae Read more

Pediatrician working at a Gaza hospital receives the bodies of her nine children, killed in an Israeli army attack - ELTIEMPO.COM

Google News Sat, 24 May 2025 17:50:31 GMT
Pediatrician working at a Gaza hospital receives the bodies of her nine children, killed in an Israeli army attack - ELTIEMPO.COM Read more

International students at Harvard react to the Trump administration's announcement - The New York Times

Google News Fri, 23 May 2025 17:49:40 GMT
International students at Harvard react to the Trump administration's announcement - The New York Times Read more

Science

What's new in TensorFlow 2.19

TensorFlow Blog 2025-03-13T09:00:00.000-07:00
What's new in TensorFlow 2.19
Image 1 Image 2
Posted by the TensorFlow team

TensorFlow 2.19 has been released! Highlights of this release include changes to the C++ API in LiteRT, bfloat16 support for TFLite casting, and the end of libtensorflow package releases. Learn more by reading the full release notes.

Note: Release updates on the new multi-backend Keras will be published on keras.io, starting with Keras 3.0. For more information, please see https://keras.io/keras_3/.

TensorFlow Core

LiteRT

The public constants tflite::Interpreter::kTensorsReservedCapacity and tflite::Interpreter::kTensorsCapacityHeadroom are now const references rather than constexpr compile-time constants. (This enables better API compatibility for TFLite in Play services while preserving the flexibility to change the values of these constants in the future.)

TF-Lite

The tfl.Cast op now supports bfloat16 in the runtime kernel. tf.lite.Interpreter emits a deprecation warning pointing to its new location, ai_edge_litert.interpreter, as the tf.lite.Interpreter API will be removed in TF 2.20. See the migration guide for details.
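As a transition aid, code that must run across several TensorFlow versions can resolve the interpreter class from whichever package is installed. This is an illustrative sketch, not an official shim; the package and module names are the ones given in the deprecation notice above.

```python
import importlib


def resolve_interpreter():
    """Return the TFLite/LiteRT Interpreter class from whichever
    package is installed, preferring the new ai_edge_litert location.

    Returns None when neither package is available (purely for this
    sketch; real code would raise an informative error instead).
    """
    for module_name in ("ai_edge_litert.interpreter", "tensorflow.lite"):
        try:
            module = importlib.import_module(module_name)
            return getattr(module, "Interpreter")
        except (ImportError, AttributeError):
            continue
    return None
```

Callers would then use `resolve_interpreter()(model_path=...)` exactly as they used `tf.lite.Interpreter` before.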

Libtensorflow

We have stopped publishing libtensorflow packages, but the shared library can still be extracted from the PyPI package.

Read more

Introducing Wake Vision: A High-Quality, Large-Scale Dataset for TinyML Computer Vision Applications

TensorFlow Blog 2024-12-05T09:00:00.000-08:00
Introducing Wake Vision: A High-Quality, Large-Scale Dataset for TinyML Computer Vision Applications
Image 1 Image 2
Posted by Colby Banbury, Emil Njor, Andrea Mattia Garavagno, Vijay Janapa Reddi – Harvard University

TinyML is an exciting frontier in machine learning, enabling models to run on extremely low-power devices such as microcontrollers and edge devices. However, the growth of this field has been stifled by a lack of tailored large and high-quality datasets. That's where Wake Vision comes in—a new dataset designed to accelerate research and development in TinyML.

A vibrant, abstract representation of a human figure is formed by swirling lines and dots of rainbow colors. A large, bright blue eye is centrally located on the figure's torso.

Why TinyML Needs Better Data

The development of TinyML requires compact and efficient models, often only a few hundred kilobytes in size. The applications targeted by standard machine learning datasets, like ImageNet, are not well-suited for these highly constrained models.

Existing datasets for TinyML, like Visual Wake Words (VWW), have laid the groundwork for progress in the field. However, their smaller size and inherent limitations pose challenges for training production-grade models. Wake Vision builds upon this foundation by providing a large, diverse, and high-quality dataset specifically tailored for person detection—the cornerstone vision task for TinyML.

What Makes Wake Vision Different?

A table displaying the number of images used for training, validation, and testing different datasets, including Wake Vision, Visual Wake Words, CIFAR-100, and PASCAL VOC 2012. The table shows the total number of images and the number of person images in each dataset split.

Wake Vision is a new, large-scale dataset with roughly 6 million images, almost 100 times larger than VWW, the previous state-of-the-art dataset for person detection in TinyML. The dataset provides two distinct training sets:

  • Wake Vision (Large): Prioritizes dataset size.
  • Wake Vision (Quality): Prioritizes label quality.

Wake Vision's comprehensive filtering and labeling process significantly enhances the dataset's quality.

Why Data Quality Matters for TinyML Models

In traditional overparameterized models, it is widely believed that data quantity matters more than data quality, since an overparameterized model can adapt to errors in the training data. But as the figure below shows, TinyML tells a different story:

Five line graphs illustrate the Wake Vision Test Score with varying percentages of training data quality used, comparing models by parameter count (78K, 309K, 1.2M, 4.9M, and 11M) and error rate (7%, 15%, and 30%).

The figure above shows that high-quality labels (less error) are more beneficial for under-parameterized models than simply having more data. Larger, error-prone datasets can still be valuable when paired with fine-grained techniques.

By providing two versions of the training set, Wake Vision enables researchers to explore the balance between dataset size and quality effectively.
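This kind of size-versus-quality comparison can be prototyped in miniature before committing to a full training run. The sketch below is a toy stand-in of our own (a two-weight logistic model on synthetic data, not Wake Vision itself); with symmetric label noise a linear model is fairly robust, so expect a smaller gap than the figure above reports for real detectors.

```python
import numpy as np

rng = np.random.default_rng(0)


def make_data(n, label_error_rate):
    """Synthetic person/no-person stand-in: the true label is the sign
    of x0 + x1, with a fraction of labels flipped to simulate noise."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    flip = rng.random(n) < label_error_rate
    y[flip] = 1.0 - y[flip]
    return X, y


def train_logreg(X, y, steps=500, lr=0.5):
    """Plain gradient descent on logistic loss: an under-parameterized,
    two-weight model in the spirit of tiny on-device detectors."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(np.clip(-(X @ w), -30.0, 30.0)))
        w -= lr * X.T @ (p - y) / len(y)
    return w


def accuracy(w, X, y):
    return float((((X @ w) > 0) == (y > 0.5)).mean())


X_test, y_test = make_data(2000, 0.0)           # clean held-out labels
w_large = train_logreg(*make_data(5000, 0.30))  # big set, 30% label error
w_clean = train_logreg(*make_data(1000, 0.02))  # small set, 2% label error
print(accuracy(w_large, X_test, y_test), accuracy(w_clean, X_test, y_test))
```

Swapping the two training sets for Wake Vision (Large) and Wake Vision (Quality) and the toy model for a real TinyML architecture turns this into the actual experiment.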

Real-World Testing: Wake Vision's Fine-Grained Benchmarks

Five images are shown, each with text underneath describing the content as Perceived Older Person, Near Person, Bright Image, Perceived Female Person, and Depicted Person.

Unlike many open-source datasets, Wake Vision offers fine-grained benchmarks and detailed tests for real-world applications like those shown in the above figure. These enable the evaluation of model performance in real-world scenarios, such as:

  • Distance: How well the model detects people at various distances from the camera.
  • Lighting Conditions: Performance in well-lit vs. poorly-lit environments.
  • Depictions: Handling of varied representations of people (e.g., drawings, sculptures).
  • Perceived Gender and Age: Detecting biases across genders and age groups.

These benchmarks give researchers a nuanced understanding of model performance in specific, real-world contexts and help identify potential biases and limitations early in the design phase.
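Mechanically, fine-grained evaluation of this kind is just accuracy computed per metadata slice. A minimal sketch of the idea (the attribute names below are illustrative, not Wake Vision's actual label schema):

```python
from collections import defaultdict


def per_slice_accuracy(predictions, labels, slices):
    """Accuracy broken down by a metadata attribute such as lighting
    condition, subject distance, or depiction type."""
    correct, total = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, slices):
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}


# Hypothetical predictions over four test images tagged by lighting:
report = per_slice_accuracy(
    predictions=[1, 1, 0, 0],
    labels=[1, 0, 0, 1],
    slices=["bright", "bright", "dark", "dark"],
)
print(report)
```

A large gap between slices (say, "bright" vs. "dark") is exactly the kind of bias these benchmarks are designed to surface early.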

Key Performance Gains With Wake Vision

The performance gains achieved using Wake Vision are impressive:

  • Up to a 6.6% increase in accuracy over the established VWW dataset.
  • Error rate reduction from 7.8% to 2.2% with manual label validation on evaluation sets.
  • Robustness across various real-world conditions, from lighting to perceived age and gender.

Furthermore, combining the two Wake Vision training sets, using the larger set for pre-training and the quality set for fine-tuning, yields the best results, highlighting the value of both datasets when used in sophisticated training pipelines.

Wake Vision Leaderboard: Track and Submit New Top-Performing Models

The Wake Vision website features a Leaderboard, providing a dedicated platform to assess and compare the performance of models trained on the Wake Vision dataset.

The leaderboard enables a clear and detailed view of how models perform under various conditions, with performance metrics like accuracy, error rates, and robustness across diverse real-world scenarios. It’s an excellent resource for both seasoned researchers and newcomers looking to improve and validate their approaches.

Explore the leaderboard to see the current rankings, learn from high-performing models, and submit your own to contribute to advancing the state of the art in TinyML person detection.

Making Wake Vision Easy to Access

Wake Vision is available through popular dataset services.

With its permissive license (CC-BY 4.0), researchers and practitioners can freely use and adapt Wake Vision for their TinyML projects.

Get Started with Wake Vision Today!

The Wake Vision team has made the dataset, code, and benchmarks publicly available to accelerate TinyML research and enable the development of better, more reliable person detection models for ultra-low-power devices.

To learn more and access the dataset, visit Wake Vision’s website, where you can also check out a leaderboard of top-performing models on the Wake Vision dataset and see if you can create better-performing models!

Read more

MLSysBook.AI: Principles and Practices of Machine Learning Systems Engineering

TensorFlow Blog 2024-11-19T09:00:00.000-08:00
MLSysBook.AI: Principles and Practices of Machine Learning Systems Engineering
Image 1 Image 2
Posted by Jason Jabbour, Kai Kleinbard and Vijay Janapa Reddi (Harvard University)

Everyone wants to do the modeling work, but no one wants to do the engineering.

If ML developers are like astronauts exploring new frontiers, ML systems engineers are the rocket scientists designing and building the engines that take them there.

Introduction

"Everyone wants to do modeling, but no one wants to do the engineering," highlights a stark reality in the machine learning (ML) world: the allure of building sophisticated models often overshadows the critical task of engineering them into robust, scalable, and efficient systems.

The reality is that ML and systems are inextricably linked. Models, no matter how innovative, are computationally demanding and require substantial resources—with the rise of generative AI and increasingly complex models, understanding how ML infrastructure scales becomes even more critical. Ignoring the system's limitations during model development is a recipe for disaster.

Unfortunately, educational resources on the systems side of machine learning are lacking. There are plenty of textbooks and materials on deep learning theory and concepts. However, we truly need more resources on the infrastructure and systems side of machine learning. Critical questions—such as how to optimize models for specific hardware, deploy them at scale, and ensure system efficiency and reliability—are still not adequately understood by ML practitioners. This lack of understanding is not due to disinterest but rather a gap in available knowledge.

One significant resource addressing this gap is MLSysBook.ai. This blog post explores key ML systems engineering concepts from MLSysBook.ai and maps them to the TensorFlow ecosystem to provide practical insights for building efficient ML systems.

The Connection Between Machine Learning and Systems

Many think machine learning is solely about extracting patterns and insights from data. While this is fundamental, it’s only part of the story. Training and deploying these "deep" neural network models often necessitates vast computational resources, from powerful GPUs and TPUs to massive datasets and distributed computing clusters.

Consider the recent wave of large language models (LLMs) that have pushed the boundaries of natural language processing. These models highlight the immense computational challenges in training and deploying large-scale machine learning models. Without carefully considering the underlying system, training times can stretch from days to weeks, inference can become sluggish, and deployment costs can skyrocket.

Building a successful machine-learning solution involves the entire system, not just the model. This is where ML systems engineering takes the reins, allowing you to optimize model architecture, hardware selection, and deployment strategies, ensuring that your models are not only powerful in theory but also efficient and scalable.

To draw an analogy, if developing algorithms is like being an astronaut exploring the vast unknown of space, then ML systems engineering is similar to the work of rocket scientists building the engines that make those journeys possible. Without the precise engineering of rocket scientists, even the most adventurous astronauts would remain earthbound.

An abstract circular design resembling a network or neural pathways consisting of interconnected nodes and lines in shades of blue, pink, and gray, against a white background

Bridging the Gap: MLSysBook.ai and System-Level Thinking

An important new resource for insights into ML systems engineering is an open-source "textbook", MLSysBook.ai, developed initially as part of Harvard University's CS249r Tiny Machine Learning course and HarvardX's TinyML online series. This project, which has expanded into an open, collaborative initiative, dives deep into the end-to-end ML lifecycle.

It highlights that the principles governing ML systems, whether designed for tiny embedded devices or large data centers, are fundamentally similar. For instance, while tiny machines might employ INT8 for numeric operations to save resources, larger systems often utilize FP16 for higher precision—the fundamental concepts, such as quantization, span across both scenarios.
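To make the quantization point concrete, here is a minimal sketch of symmetric per-tensor INT8 quantization in NumPy. This is a deliberately simplified scheme of our own; production toolchains such as the TFLite converter also handle zero points, per-channel scales, and calibration.

```python
import numpy as np


def quantize_int8(x):
    """Symmetric per-tensor quantization: map [-max|x|, max|x|] onto
    the INT8 range [-127, 127] with a single scale factor.
    Assumes a nonzero tensor (otherwise the scale would be zero)."""
    scale = float(np.max(np.abs(x))) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale


def dequantize(q, scale):
    """Recover an approximation of the original float values."""
    return q.astype(np.float32) * scale


x = np.array([0.5, -1.0, 0.25, 0.9], dtype=np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
# The round-trip error is bounded by half a quantization step (scale / 2),
# which is the accuracy cost paid for a 4x smaller tensor than FP32.
```

The same scale-and-round idea underlies both the INT8 paths on tiny devices and the FP16 paths on larger systems; only the target range changes.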

Key concepts covered in this resource include:

    1. Data Engineering: Setting the foundation by efficiently collecting, preprocessing, and managing data to prepare it for the machine learning pipeline.
    2. Model Development: Crafting and refining machine learning models to meet specific tasks and performance goals.
    3. Optimization: Fine-tuning model performance and efficiency, ensuring effective use of hardware and resources within the system.
    4. Deployment: Transitioning models from development to real-world production environments while scaling and adapting them to existing infrastructure.
    5. Monitoring and Maintenance: Continuously tracking system health and performance to maintain reliability, address issues, and adapt to evolving data and requirements.

In an efficient ML system, data engineering lays the groundwork by preparing and organizing raw data, which is essential for any machine learning process. This ensures data can be transformed into actionable insights during model development, where machine learning models are created and refined for specific tasks. Following development, optimization becomes critical for enhancing model performance and efficiency, ensuring that models are tuned to run effectively on the designated hardware and within the system's constraints.

The seamless integration of these steps then extends into the deployment phase, where models are brought into real-world production environments. Here, they must be scaled and adapted to function effectively within existing infrastructure, highlighting the importance of robust ML systems engineering. However, the lifecycle of an ML system continues after deployment; continuous monitoring and maintenance are vital. This ongoing process ensures that ML systems remain healthy, reliable and perform optimally over time, adapting to new data and requirements as they arise.

A flowchart diagrams the dependencies between different machine learning concepts, tools, and systems. Beige boxes represent concepts like 'Data Engineering' and tools like 'TensorFlow Data', while blue boxes indicate higher-level systems like 'ML Systems Engineering Principles' and 'Efficient ML Systems'. Arrows and dotted lines illustrate the relationships and workflow between these elements.
A mapping of MLSysBook.AI's core ML systems engineering concepts to the TensorFlow ecosystem, illustrating how specific TensorFlow tools support each stage of the machine learning lifecycle, ultimately contributing to the creation of efficient ML systems.

SocratiQ: An Interactive AI-Powered Generative Learning Assistant

One of the exciting innovations we’ve integrated into MLSysBook.ai is SocratiQ—an AI-powered learning assistant designed to foster a deeper and more engaging connection with content focused on machine learning systems. By leveraging a Large Language Model (LLM), SocratiQ turns learning into a dynamic, interactive experience that allows students and practitioners to engage with and co-create their educational journey actively.

With SocratiQ, readers transition from passive content consumption to an active, personalized learning experience. Here’s how SocratiQ makes this possible:

  • Interactive Quizzes: SocratiQ enhances the learning process by automatically generating quizzes based on the reading content. This feature encourages active reflection and reinforces understanding without disrupting the learning flow. Learners can test their comprehension of complex ML systems concepts.
  • Adaptive, In-Content Learning: SocratiQ offers real-time conversations with the LLM without pulling learners away from the content they're engaging with. Acting as a personalized Teaching Assistant (TA), it provides tailored explanations.
  • Progress Assessment and Gamification: Learners’ progress is tracked and stored locally in their browser, providing a personalized path to developing skills without privacy concerns. This allows for evolving engagement as the learner progresses through the material.

SocratiQ strives to be a supportive guide that respects the primacy of the content itself. It subtly integrates into the learning flow, stepping in when needed to provide guidance, quizzes, or explanations—then stepping back to let the reader continue undistracted. This design ensures that SocratiQ works harmoniously within the natural reading experience, offering support and personalization while keeping the learner immersed in the content.

We plan to integrate capabilities such as research lookups and case studies. The aim is to create a unique learning environment where readers can study and actively engage with the material. This blend of content and AI-driven assistance transforms MLSysBook.ai into a living educational resource that grows alongside the learner's understanding.

Mapping MLSysBook.ai's Concepts to the TensorFlow Ecosystem

MLSysBook.AI focuses on the core concepts of ML systems engineering while providing strategic tie-ins to the TensorFlow ecosystem, which offers a rich environment for realizing many of the principles the book discusses. Each tool supports a specific stage of the machine learning process:

  • TensorFlow Data (Data Engineering): Supports efficient data preprocessing and input pipelines.
  • TensorFlow Core (Model Development): Central to model creation and training.
  • TensorFlow Lite (Optimization): Enables model optimization for various deployment scenarios, especially critical for edge devices.
  • TensorFlow Serving (Deployment): Facilitates smooth model deployment in production environments.
  • TensorFlow Extended (Monitoring and maintenance): Offers comprehensive tools for ongoing system health and performance.

Note that MLSysBook.AI does not explicitly teach or focus on TensorFlow-specific concepts or implementations. The book's primary goal is to explore fundamental ML system engineering principles. The connections drawn in this blog post to the TensorFlow ecosystem are simply intended to illustrate how these core concepts align with tools and practices used by industry practitioners, providing a bridge between theoretical understanding and real-world application.

Support ML Systems Education: Every Star Counts 🌟

If you find this blog post valuable and want to improve ML systems engineering education, please consider giving the MLSysBook.ai GitHub repository a star ⭐.

Thanks to our sponsors, each ⭐ added to the MLSysBook.ai GitHub repository translates to donations supporting students and minorities globally by funding their research scholarships, empowering them to drive innovation in machine learning systems research worldwide.

Every star counts—help us reach the generous funding cap!

Conclusion

The gap between ML modeling and system engineering is closing, and understanding both aspects is important for creating impactful AI solutions. By embracing ML system engineering principles and leveraging powerful tools like those in the TensorFlow ecosystem, we can go beyond building models to creating complete, optimized, and scalable ML systems.

As AI continues to evolve, the demand for professionals who can bridge the gap between ML algorithms and systems implementation will only grow. Whether you're a seasoned practitioner or just starting your ML journey, investing time in understanding ML systems engineering will undoubtedly pay dividends in your career and the impact of your work. If you’d like to learn more, listen to our MLSysBook.AI podcast, generated by Google’s NotebookLM.

Remember, even the most brilliant astronauts need skilled engineers to build their rockets!

Acknowledgments

We thank Josh Gordon for his suggestion to write this blog post and for encouraging and sharing ideas on how the book could be a useful resource for the TensorFlow community.

Read more

What's new in TensorFlow 2.18

TensorFlow Blog 2024-10-28T12:00:00.000-07:00
What's new in TensorFlow 2.18
Image 1 Image 2
Posted by the TensorFlow team

TensorFlow 2.18 has been released! Highlights of this release (and 2.17) include NumPy 2.0 support, the LiteRT repository, a CUDA update, Hermetic CUDA, and more. Please see the full release notes.

Note: Release updates on the new multi-backend Keras will be published on keras.io, starting with Keras 3.0. For more information, please see https://keras.io/keras_3/.

TensorFlow Core

NumPy 2.0

The upcoming TensorFlow 2.18 release will include support for NumPy 2.0. While the majority of TensorFlow APIs will function seamlessly with NumPy 2.0, this may break some edge cases of usage, e.g., out-of-boundary conversion errors and NumPy scalar representation errors. Consult the release notes for common solutions.

Note that NumPy's type promotion rules have been changed (See NEP 50 for details). This may change the precision at which computations happen, leading either to type errors or to numerical changes to results. Please see the NumPy 2 migration guide.

We've updated some TensorFlow tensor APIs to maintain compatibility with NumPy 2.0 while preserving the out-of-boundary conversion behavior in NumPy 1.x.
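As a concrete example of an out-of-boundary conversion edge case: constructing a NumPy scalar from an out-of-range Python int wrapped silently under NumPy 1.x but raises OverflowError under 2.0, while `astype()` continues to wrap in both. The portable workaround below is our own illustration, not taken from the release notes.

```python
import numpy as np

# astype() wraps out-of-range values modulo 2**8 under both NumPy 1.x
# and 2.x, so it remains a portable way to get the old wrapping behavior.
wrapped = np.array(300).astype(np.uint8)  # 300 % 256 == 44


def uint8_compat(value):
    """Wrap explicitly before construction, since np.uint8(300) raises
    OverflowError under NumPy 2.0. Python's % also maps negative values
    the way old NumPy wrapping did (-1 -> 255)."""
    return np.uint8(value % 256)
```

Code relying on the old silent wrapping should make the wrap explicit as above, or be updated to validate its inputs instead.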


LiteRT Repository

We're making some changes to how LiteRT (formerly known as TFLite) is developed. Over the coming months, we'll gradually transition TFLite's codebase to LiteRT. Once the migration is complete, we'll start accepting contributions directly through the LiteRT repository. There will no longer be binary TFLite releases, and developers should switch to LiteRT for the latest updates.


Hermetic CUDA

If you build TensorFlow from source, Bazel will now download specific versions of the CUDA, cuDNN, and NCCL distributions and use those tools as dependencies in various Bazel targets. This enables more reproducible builds for Google ML projects and supported CUDA versions, because the build no longer relies on the locally installed versions. More details are provided here.


CUDA Update

TensorFlow binary distributions now ship with dedicated CUDA kernels for GPUs with a compute capability of 8.9. This improves performance on popular Ada-generation GPUs such as the NVIDIA RTX 40 series, L4, and L40.

To keep Python wheel sizes in check, we decided to stop shipping CUDA kernels for compute capability 5.0. This means the oldest NVIDIA GPU generation supported by the precompiled Python packages is now Pascal (compute capability 6.0). For Maxwell support, we recommend either sticking with TensorFlow 2.16 or compiling TensorFlow from source; the latter will remain possible as long as the CUDA version in use still supports Maxwell GPUs.

Read more

What's new in TensorFlow 2.17

TensorFlow Blog 2024-07-18T09:00:00.000-07:00
What's new in TensorFlow 2.17
Image 1 Image 2
Posted by the TensorFlow team

TensorFlow 2.17 has been released! Highlights of this release (and 2.16) include a CUDA update, upcoming NumPy 2.0 support, and more. Please see the full release notes.

Note: Release updates on the new multi-backend Keras will be published on keras.io, starting with Keras 3.0. For more information, please see https://keras.io/keras_3/.

TensorFlow Core

CUDA Update

TensorFlow binary distributions now ship with dedicated CUDA kernels for GPUs with a compute capability of 8.9. This improves performance on popular Ada-generation GPUs such as the NVIDIA RTX 40 series, L4, and L40.

To keep Python wheel sizes in check, we decided to stop shipping CUDA kernels for compute capability 5.0. This means the oldest NVIDIA GPU generation supported by the precompiled Python packages is now Pascal (compute capability 6.0). For Maxwell support, we recommend either sticking with TensorFlow 2.16 or compiling TensorFlow from source; the latter will remain possible as long as the CUDA version in use still supports Maxwell GPUs.

NumPy 2.0

The upcoming TensorFlow 2.18 release will include support for NumPy 2.0. This may break some edge cases of TensorFlow API usage.

Drop TensorRT support

Starting with TensorFlow 2.18, support for TensorRT will be dropped. TensorFlow 2.17 will be the last version to include it.

Read more

Gadgets

The Nintendo Switch 2 sure seems to work just fine with a USB mouse

The Verge 2025-05-24T17:51:55-04:00
The Nintendo Switch 2 sure seems to work just fine with a USB mouse
You’ll be able to use a USB mouse with the Nintendo Switch 2 in at least one game, as a Koei Tecmo developer commentary video for the upcoming Nobunaga’s Ambition: Awakening Complete Edition revealed this week. That’s great news if your wrists, like mine, started preemptively cramping the first time you saw video of someone […]
Read more

The oldest Fire TV devices are losing Netflix support soon

The Verge 2025-05-24T14:27:51-04:00
The oldest Fire TV devices are losing Netflix support soon
It’s finally time to upgrade for many owners of the earliest Amazon Fire TV devices, as Netflix is ending support for them next month, reports German outlet Heise. The cutoff for US users is June 3rd, according to ZDNet, which writes that the company has been emailing those who would be affected by the change. […]
Read more

X is back after an apparent widespread outage

The Verge 2025-05-24T11:57:25-04:00
X is back after an apparent widespread outage
X is back up for most users after what appeared to be a significant outage that spiked early this morning around 9AM ET. Global internet monitor NetBlocks posted this morning that X “has been experiencing international outages for some users for a second time in a week,” adding that the issue isn’t “related to country-level […]
Read more

Whoop is reportedly replacing defective MG trackers

The Verge 2025-05-24T11:43:54-04:00
Whoop is reportedly replacing defective MG trackers
Users of Whoop’s fitness trackers have been reporting that their Whoop MG fitness trackers are turning unresponsive, in some cases within under an hour of setting them up. Now, the company is apparently replacing the trackers, in some cases before the users even ask, TechIssuesToday reports. Launched alongside the Whoop 5.0 earlier this month, the […]
Read more

Twelve South’s slick 3-in-1 charging stand has dropped to a new low price

The Verge 2025-05-24T10:31:50-04:00
Twelve South’s slick 3-in-1 charging stand has dropped to a new low price
Memorial Day marks the unofficial start of summer, and if you somehow managed to skip your spring cleaning earlier this year, the turning of the season offers a fresh chance to declutter your space. Thankfully, the Twelve South HiRise 3 Deluxe offers a stylish way to organize your desk or bedside table, and it’s currently […]
Read more
