<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>https://kayolu.com/</loc>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://kayolu.com/archive</loc>
    <changefreq>weekly</changefreq>
    <priority>0.6</priority>
  </url>
  <url>
    <loc>https://kayolu.com/about</loc>
    <changefreq>monthly</changefreq>
    <priority>0.5</priority>
  </url>
  
  <url>
    <loc>https://kayolu.com/posts/scheduling-algorithms-for-multiprogramming-in-a-hard-real-time-environment</loc>
    <lastmod>2026-04-24</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
    
    <image:image>
      <image:loc>https://cdn.kayolu.com/uploads/1776952916707-cover-1776952915234.webp</image:loc>
      <image:title>Paper Analysis | Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment</image:title>
      <image:caption>Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment. Authors: C. L. Liu, Project MAC, Massachusetts Institute of Technology; James W. Layland, Jet Propulsion Laboratory,…</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://kayolu.com/posts/exploring-ai-machine-learning-concepts-and-applications</loc>
    <lastmod>2026-04-25</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
    
    <image:image>
      <image:loc>https://cdn.kayolu.com/ai-generated/67a60320-e789-478c-8ee0-308e1285a885.png</image:loc>
      <image:title>Hung-yi Lee Machine Learning Notes | L0-2 | An In-Depth Look at Concepts and Applications</image:title>
      <image:caption>An introduction to the fundamental concepts and applications of AI and machine learning, covering regression problems, model selection, and sources of error.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://kayolu.com/posts/understanding-gradient-descent-and-classification-models</loc>
    <lastmod>2026-04-25</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
    
    <image:image>
      <image:loc>https://cdn.kayolu.com/ai-generated/9b2e20f5-1fe1-4892-8cfd-1c19126acffb.png</image:loc>
      <image:title>Hung-yi Lee Machine Learning Notes | L3-4 | Gradient Descent and Classification Models, Explained</image:title>
      <image:caption>Explores the mechanics of gradient descent, learning-rate adaptation, Stochastic Gradient Descent, and classification models, including how to optimize probabilistic generative models for better classification accuracy.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://kayolu.com/posts/l3-4-logistic-regression</loc>
    <lastmod>2026-04-25</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
    
    <image:image>
      <image:loc>https://cdn.kayolu.com/ai-generated/efc8589c-e7c7-4d8b-a04e-9315a1e193d9.png</image:loc>
      <image:title>Hung-yi Lee Machine Learning Notes | L5 | A Concise Guide to Logistic Regression</image:title>
      <image:caption>A detailed introduction to the mathematical foundations, training methods, and applications of logistic regression, with a comparison of discriminative and generative models.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://kayolu.com/posts/l6-neural-network</loc>
    <lastmod>2026-04-25</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
    
    <image:image>
      <image:loc>https://cdn.kayolu.com/ai-generated/d672ad94-a9ad-4f95-840b-5f459a9fd920.png</image:loc>
      <image:title>Hung-yi Lee Machine Learning Notes | L6 | A Comprehensive Tour of Neural Networks</image:title>
      <image:caption>Traces the history of deep learning from the perceptron to modern multi-layer neural networks, and shows how deep learning has shaped image recognition and speech recognition.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://kayolu.com/posts/l7</loc>
    <lastmod>2026-04-25</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
    
    <image:image>
      <image:loc>https://cdn.kayolu.com/ai-generated/b271b156-206e-47e3-a07b-9852e79b7663.png</image:loc>
      <image:title>Hung-yi Lee Machine Learning Notes | L7 | Understanding the Core Mechanics of Backpropagation</image:title>
      <image:caption>A deep dive into how the backpropagation algorithm works, explaining how the chain rule and gradient descent are used to efficiently optimize neural network models.</image:caption>
    </image:image>
  </url>
  <url>
    <loc>https://kayolu.com/posts/deep-learning-training-tips</loc>
    <lastmod>2026-04-25</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
    
    <image:image>
      <image:loc>https://cdn.kayolu.com/ai-generated/4bd30386-9d42-4269-8efb-9a4d540cc22e.png</image:loc>
      <image:title>Hung-yi Lee Machine Learning Notes | L9 | Tips for Training Deep Neural Networks (DNNs)</image:title>
      <image:caption>A collection of training techniques for deep neural networks (DNNs), covering methods for combating overfitting, the choice of newer activation functions, adaptive learning rates, and regularization techniques such as Dropout.</image:caption>
    </image:image>
  </url>
</urlset>