Survival Analysis When No One Dies: A Value-Based Approach

Survival Analysis is a statistical approach used to answer the question: “How long will something last?” That “something” could range from a patient’s lifespan to the durability of a machine component or the duration of a user’s subscription. One of the most widely used tools in this area is the Kaplan-Meier estimator. Born in the …

Get Started with Rust: Installation and Your First CLI Tool – A Beginner’s Guide

Rust has become a popular programming language in recent years because it combines safety with high performance and suits a wide range of applications. It pairs the strengths of C and C++ with the modern syntax and simplicity of languages such as Python. In this article, we will take a step-by-step look …

Non-Parametric Density Estimation: Theory and Applications

In this article, we’ll talk about what Density Estimation is and the role it plays in statistical analysis. We’ll examine two popular density estimation methods, histograms and kernel density estimators, analyzing their theoretical properties as well as how they perform in practice. Finally, we’ll look at how density estimation may be used as a …

Rethinking the Environmental Costs of Training AI — Why We Should Look Beyond Hardware

Summary of This Study: Hardware choices – specifically hardware type and quantity – along with training time have a significant positive impact on energy, water, and carbon footprints during AI model training, whereas architecture-related factors do not. The interaction between hardware quantity and training time slows the growth of energy, water, and carbon consumption …

Empowering LLMs to Think Deeper by Erasing Thoughts

Introduction: Recent large language models (LLMs) — such as OpenAI’s o1/o3, DeepSeek’s R1, and Anthropic’s Claude 3.7 — demonstrate that allowing the model to think more deeply and for longer at test time can significantly enhance the model’s reasoning capability. The core approach underlying their deep-thinking capability is called chain-of-thought (CoT), where the model iteratively generates intermediate …

How I Finally Understood MCP — and Got It Working in Real Life

Table of Contents: Introduction: Why I Wrote This · The Evolution of Tool Integration with LLMs · What Is Model Context Protocol (MCP), Really? · Wait, MCP sounds like RAG… but is it? · In an MCP-based setup · In a traditional RAG system · Traditional RAG Implementation · MCP Implementation · Quick recap! · Core Capabilities of an MCP Server · Real-World Example: Claude …

The Westworld Blunder

We’re entering an interesting moment in AI development. AI systems are getting memory, reasoning chains, self-critiques, and long-context recall. These are some of the very capabilities I’ve previously written would be prerequisites for an AI system to be conscious. Just to be clear, I don’t believe today’s AI systems are self-aware, but I no longer …

Pause Your ML Pipelines for Human Review Using AWS Step Functions + Slack

Have you ever wanted to pause an automated workflow to wait for a human decision? Maybe you need approval before provisioning cloud resources, promoting a machine learning model to production, or charging a customer’s credit card. In many data science and machine learning workflows, automation gets you 90% of the way — but that critical last …
