<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Whackd]]></title><description><![CDATA[Thoughts, research and ideas.]]></description><link>https://whackd.in/</link><image><url>https://whackd.in/favicon.png</url><title>Whackd</title><link>https://whackd.in/</link></image><generator>Ghost 5.68</generator><lastBuildDate>Tue, 07 Apr 2026 05:55:07 GMT</lastBuildDate><atom:link href="https://whackd.in/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[The AI Revolution Accelerates: Tech Giants Race to Define Our Future!]]></title><description><![CDATA[The pace of AI innovation has reached a fever pitch, and the world's leading tech giants are responding with unprecedented speed, pushing the boundaries of what's possible and fiercely competing to shape the future of technology.]]></description><link>https://whackd.in/the-ai-revolution-accelerates-tech-giants-race-to-define-our-future/</link><guid isPermaLink="false">682ea1e194f2f605adc4d43e</guid><category><![CDATA[agent]]></category><category><![CDATA[agentic]]></category><category><![CDATA[AI]]></category><category><![CDATA[jules]]></category><category><![CDATA[Gemini]]></category><category><![CDATA[Mariner]]></category><category><![CDATA[Google]]></category><category><![CDATA[googleio]]></category><category><![CDATA[i/o]]></category><category><![CDATA[veo]]></category><category><![CDATA[stitch]]></category><category><![CDATA[builder]]></category><category><![CDATA[ui]]></category><category><![CDATA[ux]]></category><category><![CDATA[ainews]]></category><dc:creator><![CDATA[Rohit Pal]]></dc:creator><pubDate>Thu, 22 May 2025 05:11:37 GMT</pubDate><media:content url="https://whackd.in/content/images/2025/05/Screenshot-2025-05-22-at-9.35.47-AM.png" 
medium="image"/><content:encoded><![CDATA[<img src="https://whackd.in/content/images/2025/05/Screenshot-2025-05-22-at-9.35.47-AM.png" alt="The AI Revolution Accelerates: Tech Giants Race to Define Our Future!"><p>Google I/O 2025 has once again demonstrated this intense drive, unveiling breathtaking advancements that amplify the industry&apos;s collective push towards a more intelligent, integrated, and intuitive digital experience.</p><p>Indeed, as Microsoft CEO Satya Nadella famously said he wanted to &quot;make Google dance,&quot; Google I/O 2025 proved that a company celebrating its 26th year can still bust a move or two on the dance floor. Hot on the heels of Microsoft Build 2025, where the Redmond giant championed the &quot;age of AI agents&quot; and the &quot;open agentic web,&quot; Google&apos;s latest showcase cemented the pervasive future of AI. This isn&apos;t just about incremental updates; it&apos;s about a fundamental shift in how we interact with our digital world, with AI becoming the core of everything we do. The competition is fierce, but the shared vision for an AI-first era is clear, paving the way for innovations that will transform our daily lives.</p><hr><h3 id="google-io-2025s-key-highlights-and-parallels-with-microsofts-vision">Google I/O 2025&apos;s Key Highlights and Parallels with Microsoft&apos;s Vision</h3><p>Google&apos;s keynote was a masterclass in AI integration, demonstrating how Gemini is evolving into a truly universal AI assistant. Here are some of the <strong>key highlights</strong> that parallel Microsoft&apos;s vision:</p><h3 id="ambient-ai-and-real-time-interaction">Ambient AI and Real-time Interaction</h3><p><strong>Gemini Live (Project Astra):</strong> Google&apos;s real-time, camera and screen-sharing AI interaction is a stride towards ambient computing. It allows Gemini to perceive and respond to your surroundings, providing information and assistance on the fly. 
This mirrors Microsoft&apos;s continuous efforts to integrate Copilot deeply within Windows and Microsoft 365, aiming for AI to be a seamless part of your workflow and environment. Real-time translation capabilities are also coming to Google Meet, breaking down language barriers.</p><h3 id="autonomous-agents-and-task-automation">Autonomous Agents and Task Automation</h3><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://whackd.in/content/images/2025/05/Screenshot-2025-05-22-at-9.36.58-AM-1.png" width="2000" height="1121" loading="lazy" alt="The AI Revolution Accelerates: Tech Giants Race to Define Our Future!" srcset="https://whackd.in/content/images/size/w600/2025/05/Screenshot-2025-05-22-at-9.36.58-AM-1.png 600w, https://whackd.in/content/images/size/w1000/2025/05/Screenshot-2025-05-22-at-9.36.58-AM-1.png 1000w, https://whackd.in/content/images/size/w1600/2025/05/Screenshot-2025-05-22-at-9.36.58-AM-1.png 1600w, https://whackd.in/content/images/size/w2400/2025/05/Screenshot-2025-05-22-at-9.36.58-AM-1.png 2400w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://whackd.in/content/images/2025/05/Screenshot-2025-05-22-at-9.37.04-AM-1.png" width="2000" height="1148" loading="lazy" alt="The AI Revolution Accelerates: Tech Giants Race to Define Our Future!" 
srcset="https://whackd.in/content/images/size/w600/2025/05/Screenshot-2025-05-22-at-9.37.04-AM-1.png 600w, https://whackd.in/content/images/size/w1000/2025/05/Screenshot-2025-05-22-at-9.37.04-AM-1.png 1000w, https://whackd.in/content/images/size/w1600/2025/05/Screenshot-2025-05-22-at-9.37.04-AM-1.png 1600w, https://whackd.in/content/images/size/w2400/2025/05/Screenshot-2025-05-22-at-9.37.04-AM-1.png 2400w" sizes="(min-width: 720px) 720px"></div></div></div><figcaption><p><span style="white-space: pre-wrap;">Project Mariner</span></p></figcaption></figure><p><strong>Project Mariner and Agent Mode:</strong> Google introduced an agent capable of web interaction, multitasking (up to 10 simultaneous tasks!), and learning from demonstrations (&quot;teach and repeat&quot;). The experimental Agent Mode in the Gemini App automates tasks like finding apartments and scheduling tours. This directly aligns with Microsoft&apos;s heavy emphasis on &quot;<strong>agentic AI</strong>&quot; at Build 2025, where new tools to build advanced agentic applications were unveiled.</p><h3 id="reimagined-search-and-information-access">Reimagined Search and Information Access</h3><p><strong>AI in Search: A Reimagined Experience:</strong> Google&apos;s &quot;All-new AI Mode&quot; offers advanced reasoning for complex queries, personalized suggestions, and &quot;Deep Search&quot; that creates expert-level reports. Rolling out widely, this AI Mode will provide ChatGPT- or Perplexity-style answers directly in Google Search, marking a significant shift in how we find information and potentially signaling &quot;an end of an era for the web.&quot;</p><h3 id="foundational-infrastructure-and-developer-tools">Foundational Infrastructure and Developer Tools</h3><p><strong>Foundation and Infrastructure:</strong> Google&apos;s <strong>7th Generation TPU Ironwood</strong> delivers a claimed 10x performance improvement over the previous generation, emphasizing the critical need for powerful underlying hardware for AI. 
This resonates with Microsoft&apos;s continuous investments in Azure infrastructure and the introduction of <strong>Windows AI Foundry</strong>, a unified platform for AI development.</p><h3 id="ai-for-creativity-and-content-generation">AI for Creativity and Content Generation</h3><p><strong>Creative Tools and Models:</strong> Google unveiled <strong>Imagen 4</strong> for enhanced image generation, billed as the second-best image-generation model overall and the fastest, with particular strength in typography. They also showcased <strong>Veo 3</strong> for state-of-the-art photorealistic video generation (now with integrated audio), and <strong>Lyria 2</strong> for high-fidelity music creation, alongside <strong>SynthID Detector</strong> for watermarking. This vibrant ecosystem for AI-powered creativity aligns with Microsoft&apos;s broad approach to empowering creators and developers with tools and model offerings. Flow, a new AI filmmaking tool, further enhances video creation by enabling consistent characters and sound effects.</p><h3 id="new-frontiers-in-human-computer-interaction">New Frontiers in Human-Computer Interaction</h3><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2025/05/Screenshot-2025-05-22-at-9.36.06-AM.png" class="kg-image" alt="The AI Revolution Accelerates: Tech Giants Race to Define Our Future!" 
loading="lazy" width="2000" height="1075" srcset="https://whackd.in/content/images/size/w600/2025/05/Screenshot-2025-05-22-at-9.36.06-AM.png 600w, https://whackd.in/content/images/size/w1000/2025/05/Screenshot-2025-05-22-at-9.36.06-AM.png 1000w, https://whackd.in/content/images/size/w1600/2025/05/Screenshot-2025-05-22-at-9.36.06-AM.png 1600w, https://whackd.in/content/images/size/w2400/2025/05/Screenshot-2025-05-22-at-9.36.06-AM.png 2400w" sizes="(min-width: 720px) 720px"></figure><p><strong>Android XR:</strong> The integration of Gemini into XR devices and partnerships with Samsung and Qualcomm to develop Android XR, including concepts like &quot;<strong>Android XR Glasses</strong>&quot; for hands-free AI interaction, speaks volumes about the future of human-computer interaction. This parallels Microsoft&apos;s long-standing commitment to mixed reality and exploration of new hardware paradigms. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://whackd.in/content/images/2025/05/Screenshot-2025-05-22-at-9.40.53-AM.png" class="kg-image" alt="The AI Revolution Accelerates: Tech Giants Race to Define Our Future!" 
loading="lazy" width="2000" height="1009" srcset="https://whackd.in/content/images/size/w600/2025/05/Screenshot-2025-05-22-at-9.40.53-AM.png 600w, https://whackd.in/content/images/size/w1000/2025/05/Screenshot-2025-05-22-at-9.40.53-AM.png 1000w, https://whackd.in/content/images/size/w1600/2025/05/Screenshot-2025-05-22-at-9.40.53-AM.png 1600w, https://whackd.in/content/images/size/w2400/2025/05/Screenshot-2025-05-22-at-9.40.53-AM.png 2400w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Virtual Try-On</span></figcaption></figure><p><strong>Virtual try-on for shopping</strong>, allowing users to virtually try on clothes with just a full-body picture and offering <strong>better quality than ever before</strong>, also promises to revolutionize retail.</p><hr><h3 id="what-sets-google-apart-unique-innovations-and-approaches">What Sets Google Apart: Unique Innovations and Approaches</h3><p>While many themes overlap, Google I/O 2025 showcased several areas where Google&apos;s approach offers distinct innovations or pushes boundaries in unique ways:</p><h4 id="multimodal-real-time-world-interaction-project-astras-depth">Multimodal, Real-time World Interaction (Project Astra&apos;s Depth)</h4><p>While Microsoft is integrating AI, Google&apos;s Project Astra, particularly the &quot;live&quot; capabilities that allow Gemini to process and respond to real-time video feeds from your environment, showcases a deeper dive into ambient, context-aware AI interaction that feels particularly advanced in its immediacy and responsiveness to the physical world. 
This goes beyond simple image recognition to real-time conversational understanding of dynamic visual and auditory input.</p><h4 id="cutting-edge-model-performance-deep-reasoning-specialized-models">Cutting-Edge Model Performance (Deep Reasoning &amp; Specialized Models)</h4><p>Google&apos;s flagship model, Gemini 2.5 Pro, now features &quot;Deep Think,&quot; a deeper reasoning mode that lets it explore multiple hypotheses before settling on an answer. The model is state-of-the-art on multimodal benchmarks (MMMU) and code generation (LiveCodeBench), and scored twice as high as the next-best model on the USAMO 2025 math challenge, demonstrating exceptional reasoning capabilities. Additionally, &quot;Gemini Diffusion&quot;, an experimental model that generates code 10-15x faster than comparable autoregressive models by applying diffusion (a technique previously used mainly for image generation), marks a significant leap in the speed and efficiency of AI-powered software development.</p><h4 id="ai-for-software-engineering-design-transformation">AI for Software Engineering &amp; Design Transformation</h4><p><strong>Stitch (UI/UX Design)</strong></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://whackd.in/content/images/2025/05/google-stitch-ss-scaled--1-.png" class="kg-image" alt="The AI Revolution Accelerates: Tech Giants Race to Define Our Future!" 
loading="lazy" width="2000" height="1125" srcset="https://whackd.in/content/images/size/w600/2025/05/google-stitch-ss-scaled--1-.png 600w, https://whackd.in/content/images/size/w1000/2025/05/google-stitch-ss-scaled--1-.png 1000w, https://whackd.in/content/images/size/w1600/2025/05/google-stitch-ss-scaled--1-.png 1600w, https://whackd.in/content/images/size/w2400/2025/05/google-stitch-ss-scaled--1-.png 2400w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Stitch UI/UX Designer</span></figcaption></figure><p>Google acquired Stitch, a startup that enables iterative UI design directly from prompts, with the ability to download designs into Figma. This signifies Google&apos;s bold move into AI-powered design automation.</p><p><strong>Jules (AI Software Engineer)</strong></p><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://whackd.in/content/images/2025/05/Screenshot-2025-05-22-at-9.38.59-AM.png" width="2000" height="1186" loading="lazy" alt="The AI Revolution Accelerates: Tech Giants Race to Define Our Future!" srcset="https://whackd.in/content/images/size/w600/2025/05/Screenshot-2025-05-22-at-9.38.59-AM.png 600w, https://whackd.in/content/images/size/w1000/2025/05/Screenshot-2025-05-22-at-9.38.59-AM.png 1000w, https://whackd.in/content/images/size/w1600/2025/05/Screenshot-2025-05-22-at-9.38.59-AM.png 1600w, https://whackd.in/content/images/size/w2400/2025/05/Screenshot-2025-05-22-at-9.38.59-AM.png 2400w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://whackd.in/content/images/2025/05/Screenshot-2025-05-22-at-9.39.02-AM.png" width="2000" height="1109" loading="lazy" alt="The AI Revolution Accelerates: Tech Giants Race to Define Our Future!" 
srcset="https://whackd.in/content/images/size/w600/2025/05/Screenshot-2025-05-22-at-9.39.02-AM.png 600w, https://whackd.in/content/images/size/w1000/2025/05/Screenshot-2025-05-22-at-9.39.02-AM.png 1000w, https://whackd.in/content/images/size/w1600/2025/05/Screenshot-2025-05-22-at-9.39.02-AM.png 1600w, https://whackd.in/content/images/size/w2400/2025/05/Screenshot-2025-05-22-at-9.39.02-AM.png 2400w" sizes="(min-width: 720px) 720px"></div></div><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://whackd.in/content/images/2025/05/Screenshot-2025-05-22-at-9.39.09-AM.png" width="2000" height="1118" loading="lazy" alt="The AI Revolution Accelerates: Tech Giants Race to Define Our Future!" srcset="https://whackd.in/content/images/size/w600/2025/05/Screenshot-2025-05-22-at-9.39.09-AM.png 600w, https://whackd.in/content/images/size/w1000/2025/05/Screenshot-2025-05-22-at-9.39.09-AM.png 1000w, https://whackd.in/content/images/size/w1600/2025/05/Screenshot-2025-05-22-at-9.39.09-AM.png 1600w, https://whackd.in/content/images/size/w2400/2025/05/Screenshot-2025-05-22-at-9.39.09-AM.png 2400w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://whackd.in/content/images/2025/05/Screenshot-2025-05-22-at-9.39.32-AM.png" width="2000" height="1288" loading="lazy" alt="The AI Revolution Accelerates: Tech Giants Race to Define Our Future!" 
srcset="https://whackd.in/content/images/size/w600/2025/05/Screenshot-2025-05-22-at-9.39.32-AM.png 600w, https://whackd.in/content/images/size/w1000/2025/05/Screenshot-2025-05-22-at-9.39.32-AM.png 1000w, https://whackd.in/content/images/size/w1600/2025/05/Screenshot-2025-05-22-at-9.39.32-AM.png 1600w, https://whackd.in/content/images/size/w2400/2025/05/Screenshot-2025-05-22-at-9.39.32-AM.png 2400w" sizes="(min-width: 720px) 720px"></div></div></div><figcaption><p><span style="white-space: pre-wrap;">Jules AI programmer</span></p></figcaption></figure><p>Jules is an innovative app that allows users to make changes to their GitHub repositories using simple English prompts, without even needing to clone the repo to their local machine &#x2013; all through a simple UI. This represents a significant step towards a more accessible and intuitive AI software engineering experience.</p><h4 id="integrated-video-audio-generation-veo-3-and-comprehensive-ai-safety">Integrated Video &amp; Audio Generation (Veo 3) and Comprehensive AI Safety</h4><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://whackd.in/content/images/2025/05/Screenshot-2025-05-22-at-9.41.51-AM-1.png" width="2000" height="1172" loading="lazy" alt="The AI Revolution Accelerates: Tech Giants Race to Define Our Future!" 
srcset="https://whackd.in/content/images/size/w600/2025/05/Screenshot-2025-05-22-at-9.41.51-AM-1.png 600w, https://whackd.in/content/images/size/w1000/2025/05/Screenshot-2025-05-22-at-9.41.51-AM-1.png 1000w, https://whackd.in/content/images/size/w1600/2025/05/Screenshot-2025-05-22-at-9.41.51-AM-1.png 1600w, https://whackd.in/content/images/size/w2400/2025/05/Screenshot-2025-05-22-at-9.41.51-AM-1.png 2400w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://whackd.in/content/images/2025/05/Screenshot-2025-05-22-at-9.41.55-AM-2.png" width="2000" height="1132" loading="lazy" alt="The AI Revolution Accelerates: Tech Giants Race to Define Our Future!" srcset="https://whackd.in/content/images/size/w600/2025/05/Screenshot-2025-05-22-at-9.41.55-AM-2.png 600w, https://whackd.in/content/images/size/w1000/2025/05/Screenshot-2025-05-22-at-9.41.55-AM-2.png 1000w, https://whackd.in/content/images/size/w1600/2025/05/Screenshot-2025-05-22-at-9.41.55-AM-2.png 1600w, https://whackd.in/content/images/size/w2400/2025/05/Screenshot-2025-05-22-at-9.41.55-AM-2.png 2400w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://whackd.in/content/images/2025/05/Screenshot-2025-05-22-at-9.42.02-AM-2.png" width="2000" height="1155" loading="lazy" alt="The AI Revolution Accelerates: Tech Giants Race to Define Our Future!" 
srcset="https://whackd.in/content/images/size/w600/2025/05/Screenshot-2025-05-22-at-9.42.02-AM-2.png 600w, https://whackd.in/content/images/size/w1000/2025/05/Screenshot-2025-05-22-at-9.42.02-AM-2.png 1000w, https://whackd.in/content/images/size/w1600/2025/05/Screenshot-2025-05-22-at-9.42.02-AM-2.png 1600w, https://whackd.in/content/images/size/w2400/2025/05/Screenshot-2025-05-22-at-9.42.02-AM-2.png 2400w" sizes="(min-width: 720px) 720px"></div></div><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://whackd.in/content/images/2025/05/Screenshot-2025-05-22-at-9.42.11-AM-2.png" width="2000" height="1092" loading="lazy" alt="The AI Revolution Accelerates: Tech Giants Race to Define Our Future!" srcset="https://whackd.in/content/images/size/w600/2025/05/Screenshot-2025-05-22-at-9.42.11-AM-2.png 600w, https://whackd.in/content/images/size/w1000/2025/05/Screenshot-2025-05-22-at-9.42.11-AM-2.png 1000w, https://whackd.in/content/images/size/w1600/2025/05/Screenshot-2025-05-22-at-9.42.11-AM-2.png 1600w, https://whackd.in/content/images/size/w2400/2025/05/Screenshot-2025-05-22-at-9.42.11-AM-2.png 2400w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://whackd.in/content/images/2025/05/Screenshot-2025-05-22-at-9.42.28-AM-2.png" width="2000" height="1032" loading="lazy" alt="The AI Revolution Accelerates: Tech Giants Race to Define Our Future!" 
srcset="https://whackd.in/content/images/size/w600/2025/05/Screenshot-2025-05-22-at-9.42.28-AM-2.png 600w, https://whackd.in/content/images/size/w1000/2025/05/Screenshot-2025-05-22-at-9.42.28-AM-2.png 1000w, https://whackd.in/content/images/size/w1600/2025/05/Screenshot-2025-05-22-at-9.42.28-AM-2.png 1600w, https://whackd.in/content/images/size/w2400/2025/05/Screenshot-2025-05-22-at-9.42.28-AM-2.png 2400w" sizes="(min-width: 720px) 720px"></div></div></div><figcaption><p><span style="white-space: pre-wrap;">Veo 3 Demo</span></p></figcaption></figure><p>Google&apos;s <strong>Veo 3</strong> stands out by natively generating high-quality video <em>with integrated sound effects, background noises, and even dialogue</em>. This integrated audio-visual capability, coupled with the <strong>SynthID Detector</strong>, which identifies invisible SynthID watermarks across media types (image, audio, text, video), represents a critical and forward-thinking innovation for AI safety and provenance, and is arguably more comprehensive than competing efforts in the range of media it covers.</p><h4 id="thinking-budgets-for-model-control">&quot;Thinking Budgets&quot; for Model Control</h4><p>The introduction of &quot;Thinking Budgets&quot; for Gemini 2.5 Pro, offering developers control over cost and latency versus quality, is a novel approach to managing complex AI model deployments. 
This granular control over the model&apos;s &quot;thinking&quot; process could be a significant differentiator for developers optimizing AI applications, potentially leading to more efficient and sustainable AI solutions.</p><h4 id="android-xr-and-lightweight-glasses-for-daily-use">Android XR and Lightweight Glasses for Daily Use</h4><p>While both companies are investing in XR, Google&apos;s specific focus on lightweight Android XR glasses with in-lens displays, cameras, and microphones, developed in partnership with fashion brands like Gentle Monster and Warby Parker, suggests a strategic pathway towards more consumer-friendly, ubiquitous AI-powered wearables. The goal is eyewear that integrates seamlessly into daily life, driving adoption well beyond industrial or specialized use cases.</p><hr><h3 id="the-agentic-future-is-here">The Agentic Future is Here</h3><p>With over 400 million monthly active users for Gemini and 480 trillion tokens processed each month, Google is demonstrating immense scale and leadership in the AI space. Both Google I/O 2025 and Microsoft Build 2025 have made it abundantly clear: AI is no longer just a feature; it&apos;s the core of how we will interact with technology. From intelligent agents automating complex tasks to AI seamlessly integrated into our devices and surroundings, the future is about more intuitive, proactive, and personalized digital experiences.</p><p>The parallels between these two tech giants&apos; announcements are striking, indicating a shared vision for an AI-first world. As these advancements roll out, we can expect a truly exciting era of innovation that will fundamentally change how we work, create, and connect. 
The race to build the ultimate AI companion is well underway, and we, the users, are the ultimate beneficiaries.</p>]]></content:encoded></item><item><title><![CDATA[The AI Horizon Expands: Microsoft's Bold Moves in the Agentic Era]]></title><description><![CDATA[The air at Microsoft Build 2025 crackled with more than just electricity; it hummed with the potential of intelligent automation. This year's conference wasn't just about incremental updates; it felt like a definitive leap into the age of the AI agent.]]></description><link>https://whackd.in/microsofts-bold-moves-in-the-agentic-era/</link><guid isPermaLink="false">682bfe1cde720b060795ed66</guid><category><![CDATA[AI]]></category><category><![CDATA[agentic]]></category><category><![CDATA[agent]]></category><category><![CDATA[openai]]></category><category><![CDATA[grok]]></category><category><![CDATA[mistral]]></category><category><![CDATA[llama]]></category><category><![CDATA[copilot]]></category><category><![CDATA[peer]]></category><category><![CDATA[coder]]></category><category><![CDATA[vibe coding]]></category><category><![CDATA[NLWeb]]></category><category><![CDATA[MCP]]></category><category><![CDATA[Azure]]></category><category><![CDATA[ainews]]></category><dc:creator><![CDATA[Rohit Pal]]></dc:creator><pubDate>Tue, 20 May 2025 04:26:03 GMT</pubDate><media:content url="https://whackd.in/content/images/2025/05/Screenshot-2025-05-20-at-9.47.01-AM.png" medium="image"/><content:encoded><![CDATA[<img src="https://whackd.in/content/images/2025/05/Screenshot-2025-05-20-at-9.47.01-AM.png" alt="The AI Horizon Expands: Microsoft&apos;s Bold Moves in the Agentic Era"><p>Microsoft is betting big on AI that <em>acts</em>, that <em>orchestrates</em>, that becomes an integral part of our digital workflows.</p><p>The headline grabber? The pervasive focus on <strong>AI agents</strong>. 
Microsoft&apos;s own Work Trend Index reveals a significant appetite among business leaders for these digital collaborators, and Build delivered the tools to make this vision a reality. We&apos;re talking about AI that doesn&apos;t just respond to prompts but proactively tackles tasks, learns from context, and even collaborates with other agents to achieve complex objectives. Imagine workflows that self-optimize, projects that manage themselves, and development cycles that anticipate needs &#x2013; this is the direction Microsoft is heading.</p><p>But the agentic future needs a robust foundation, and that&apos;s where <strong>Azure AI Foundry</strong> steps into the spotlight. Think of it as the bedrock for this new era of AI. It&apos;s not just a marketplace; it&apos;s an integrated environment for crafting, tailoring, and deploying sophisticated AI applications and, crucially, these autonomous agents.</p><p></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://whackd.in/content/images/2025/05/image.png" class="kg-image" alt="The AI Horizon Expands: Microsoft&apos;s Bold Moves in the Agentic Era" loading="lazy" width="2000" height="904" srcset="https://whackd.in/content/images/size/w600/2025/05/image.png 600w, https://whackd.in/content/images/size/w1000/2025/05/image.png 1000w, https://whackd.in/content/images/size/w1600/2025/05/image.png 1600w, https://whackd.in/content/images/size/w2400/2025/05/image.png 2400w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Azure AI Foundry</span></figcaption></figure><h3 id="azure-ai-foundrya-unified-platform-for-ai-development-and-agent-management">Azure AI Foundry - A Unified Platform for AI Development and Agent Management</h3><p>Microsoft unveiled <strong>Azure AI Foundry</strong>, a comprehensive platform designed for developers to build, customize, and manage the next generation of AI applications and intelligent agents. 
This unified environment aims to streamline the AI development lifecycle, offering a &quot;production line for intelligence&quot; supporting a variety of models.</p><h3 id="azure-ai-foundry-modelsa-diverse-ecosystem-of-cutting-edge-ai">Azure AI Foundry Models - A Diverse Ecosystem of Cutting-Edge AI</h3><p>The platform now features <strong>Azure AI Foundry Models</strong>, significantly expanding the range of AI models available on Azure. This comprehensive catalog includes leading models from various providers, offering developers a wide array of capabilities to choose from. Notably, this includes the integration of <strong>Grok 3 and Grok 3 Mini models from xAI</strong>, Elon Musk&apos;s AI company. Beyond xAI, Azure AI Foundry provides access to models from <strong>OpenAI (including the latest GPT-4o), Meta (Llama family), Mistral AI, Cohere, and Stability AI</strong>, among others. This diverse selection ensures that developers can leverage the best-suited intelligence for their specific applications, all hosted and billed directly through Microsoft Azure. <strong>Azure&apos;s new model router automatically chooses the best OpenAI model for a given task</strong>, further optimizing performance. The platform boasts access to over 1,900 partner and Microsoft-hosted models.</p><h3 id="streamlining-agent-development-and-management-in-azure-ai-foundry">Streamlining Agent Development and Management in Azure AI Foundry</h3><p>Microsoft is providing developers with a suite of tools to simplify the creation and management of AI agents within Azure AI Foundry. The <strong>Foundry Agent Service allows building declarative agents with just a few lines of code</strong>, streamlining the development process. Agents built in Foundry and Copilot Studio automatically appear in an agent directory in Entra. 
To help developers navigate the vast model ecosystem, Microsoft introduced the <strong>Model Leaderboard</strong> for performance comparison and the <strong>Model Router</strong> for intelligent, real-time model selection based on specific needs. Ensuring the reliability and efficiency of AI agents, Microsoft is also adding <strong>Azure AI Foundry Observability</strong> features. These built-in monitoring tools will track agent performance, quality, cost, and safety.</p><h3 id="enhancing-microsoft-365-copilot-with-customization-and-collaboration">Enhancing Microsoft 365 Copilot with Customization and Collaboration</h3><p>Microsoft is significantly boosting the capabilities of Microsoft 365 Copilot. The introduction of <strong>Microsoft 365 Copilot Tuning</strong> will allow organizations to train models and build custom AI agents using their own data, workflows, and processes in a low-code environment, tailoring the AI to their specific needs. Furthermore, Microsoft is unveiling <strong>multi-agent orchestration</strong> within Copilot, enabling the creation of systems where multiple AI agents can collaborate to tackle more complex tasks and achieve broader organizational goals, transforming how teams work together.</p><h3 id="github-copilot-from-pair-programmer-to-autonomous-agent-%E2%80%93-embracing-vibe-coding">GitHub Copilot: From Pair Programmer to Autonomous Agent &#x2013; Embracing &quot;Vibe Coding&quot;</h3><p>For developers, <strong>GitHub Copilot</strong> is rapidly evolving from a helpful pair programmer to a more autonomous agent within the development workflow. Microsoft is open-sourcing the GitHub Copilot Chat extension and integrating it into the core open-source VS Code repository, and the introduction of <strong>Agent mode in Visual Studio Code</strong> promises a more context-aware coding assistant. 
This enhanced Copilot can now leverage information from multiple files and data sources to provide more intelligent suggestions and even proactively address development tasks, fostering what some are calling &quot;<strong>vibe coding</strong>&quot; &#x2013; a more fluid and intuitive development experience where the AI seamlessly anticipates and assists your coding flow.</p><p>Perhaps one of the most compelling moments of Microsoft Build was <strong>Satya Nadella&apos;s live demonstration of GitHub Copilot in action.</strong> In a powerful display of its evolving capabilities, Copilot was assigned a real-world pull request (PR) task directly within GitHub. The AI agent didn&apos;t just suggest code; it analyzed the issue, proposed the necessary code changes, implemented them, and even generated the pull request itself &#x2013; all in real-time. This wasn&apos;t a scripted scenario; it was a tangible glimpse into a future where AI seamlessly integrates into the development workflow, automating even intricate tasks like code contributions and PR management. <strong>GitHub Copilot is becoming central to how we code</strong>, moving beyond a simple assistant to a true peer programmer capable of handling significant development responsibilities.</p><h3 id="copilot-edits-for-natural-language-code-modifications">Copilot Edits for Natural Language Code Modifications</h3><p>Streamlining the code modification process, <strong>Copilot Edits</strong> will allow developers to make inline code changes across multiple files using natural language prompts, all while maintaining developer control over the applied changes.</p><h3 id="enhanced-visual-studio-and-vs-code">Enhanced Visual Studio and VS Code</h3><p>Microsoft is also improving its core development tools. <strong>Visual Studio</strong> is getting better with .NET 10 support, live preview at design time, improved Git tooling, and a new debugger for cross-platform apps. Stable releases will now occur monthly. 
<strong>VS Code</strong> received improved multi-window support and easier staging directly from the editor in its 100th release. Furthermore, Microsoft reiterated its commitment to the open-source community by <strong>announcing plans to open-source even more components of Visual Studio Code</strong>, building on its already significant open foundation. This move aims to foster greater collaboration and extensibility within the developer ecosystem.</p><h3 id="nlweb-the-html-of-the-agentic-web">NLWeb: The HTML of the Agentic Web</h3><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://whackd.in/content/images/2025/05/image.jpg" class="kg-image" alt="The AI Horizon Expands: Microsoft&apos;s Bold Moves in the Agentic Era" loading="lazy" width="2000" height="1125" srcset="https://whackd.in/content/images/size/w600/2025/05/image.jpg 600w, https://whackd.in/content/images/size/w1000/2025/05/image.jpg 1000w, https://whackd.in/content/images/size/w1600/2025/05/image.jpg 1600w, https://whackd.in/content/images/size/w2400/2025/05/image.jpg 2400w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">NLWeb: The HTML of the Agentic Web</span></figcaption></figure><p></p><p>Microsoft CTO Kevin Scott drew a compelling parallel, comparing the <strong>Model Context Protocol (MCP)</strong> to <strong>HTTP</strong> and introducing <strong>NLWeb</strong> as a potential <strong>HTML</strong> for the emerging &quot;agentic web.&quot; <strong>NLWeb</strong> is an open-source project designed to simplify the integration of AI interfaces into websites. It allows developers to add a conversational interface, powered by their chosen AI model and data, with minimal code. Every NLWeb instance also functions as an MCP server, enabling websites to make their content readily accessible to AI agents within the MCP ecosystem. 
The <strong>Model Context Protocol (MCP)</strong> is an open standard that standardizes how AI models integrate and share data with external tools, systems, and data sources. It provides a universal interface for AI assistants to read files, execute functions, and handle contextual prompts, facilitating seamless communication between AI and various applications.</p><h3 id="microsoft-discoveryai-for-accelerated-rd">Microsoft Discovery - AI for Accelerated R&amp;D</h3><p>In the realm of research, Microsoft introduced <strong>Microsoft Discovery</strong>, an extensible platform leveraging agentic AI and a graph-based knowledge engine to accelerate the entire research and development process. In a demonstration, Discovery was used to identify and synthesize PFAS-free immersion coolants.</p><h3 id="data-integration-and-storage">Data Integration and Storage</h3><p><strong>Cosmos DB is being integrated directly into Foundry</strong> for storing and retrieving conversational history. Cosmos DB is also being brought to Fabric, allowing AI apps to access structured and semi-structured data. For use cases needing low latency and explicit control, <strong>Azure Local</strong> is offered. Azure Local extends Azure to customer-owned infrastructure, enabling local execution of modern and traditional applications across distributed locations. This solution offers a unified management experience on a single control plane.</p><p>Of course, no discussion about cutting-edge technology is complete without acknowledging the ethical considerations. Microsoft&apos;s acknowledgment of providing AI services to the Israeli military and the subsequent internal reviews highlight the complex tightrope walk that tech giants navigate. 
The commitment to responsible AI development and deployment remains a critical aspect of this journey.</p><p>The recent layoffs within Microsoft, while undoubtedly impacting individuals, are widely seen as a strategic recalibration, a reallocation of resources towards the immense potential of AI. This bold restructuring underscores the company&apos;s conviction in the transformative power of artificial intelligence.</p>]]></content:encoded></item><item><title><![CDATA[Mastering the Transactional Outbox Pattern: A Deep Dive with Code, Pitfalls, and Best Practices]]></title><description><![CDATA[Keep your data consistent and your events reliable, even when things go sideways]]></description><link>https://whackd.in/mastering-the-transactional-outbox-pattern-a-deep-dive-with-code-pitfalls-and-best-practices/</link><guid isPermaLink="false">67e01227bd9bb905dcaaad7c</guid><category><![CDATA[transactional]]></category><category><![CDATA[outbox]]></category><category><![CDATA[pattern]]></category><category><![CDATA[java]]></category><category><![CDATA[design]]></category><category><![CDATA[system]]></category><category><![CDATA[designpattern]]></category><dc:creator><![CDATA[Rohit Pal]]></dc:creator><pubDate>Sun, 23 Mar 2025 14:30:32 GMT</pubDate><media:content url="https://whackd.in/content/images/2025/03/angryimg--1--4.png" medium="image"/><content:encoded><![CDATA[<br><h3 id="introduction-the-problem-of-ghost-events"><strong>Introduction: The Problem of &quot;Ghost Events&quot;</strong></h3><img src="https://whackd.in/content/images/2025/03/angryimg--1--4.png" alt="Mastering the Transactional Outbox Pattern: A Deep Dive with Code, Pitfalls, and Best Practices"><p>Imagine this: Your e-commerce app processes an order, saves it to the database, and tries to publish an&#xA0;<code>OrderCreated</code>&#xA0;event to notify other services. But what if the database commit succeeds, and the event publish&#xA0;<strong>fails</strong>? 
Suddenly, inventory isn&#x2019;t updated, payment isn&#x2019;t processed, and your users are left hanging.&#xA0;</p><p>This is the&#xA0;<strong>dual-write problem</strong>: ensuring atomicity between database updates and event publishing. Enter the&#xA0;<strong>Transactional Outbox Pattern</strong>&#x2014;a battle-tested solution to keep your system consistent. Let&#x2019;s break it down!<br></p><h3 id="what%E2%80%99s-the-transactional-outbox-pattern"><strong>What&#x2019;s the Transactional Outbox Pattern?</strong></h3><p>The idea is simple but powerful:</p><ol><li><strong>Bundle</strong>&#xA0;your database update and event into a single transaction.</li><li><strong>Store the event</strong>&#xA0;in an &quot;outbox&quot; table in the same database.</li><li><strong>Relay events</strong>&#xA0;to the message broker&#xA0;<em>asynchronously</em>&#xA0;(e.g., via a background worker).</li></ol><p>No more half-baked states! If the transaction commits, the event is guaranteed to eventually publish.</p><h4 id="1-define-the-outbox-table-entity">1. Define the Outbox Table Entity</h4>
<pre><code class="language-java">import jakarta.persistence.*;
import java.time.LocalDateTime;
import java.util.UUID;

@Entity
@Table(name = &quot;outbox_messages&quot;)
public class OutboxMessage {

    @Id
    // no @GeneratedValue: the id is assigned explicitly (UUID.randomUUID()) when the message is created
    private UUID id;

    @Column(name = &quot;event_type&quot;, nullable = false)
    private String eventType;

    @Column(name = &quot;payload&quot;, nullable = false, columnDefinition = &quot;TEXT&quot;)
    private String payload;

    @Column(name = &quot;created_at&quot;, nullable = false)
    private LocalDateTime createdAt;

    @Column(name = &quot;processed&quot;, nullable = false)
    private boolean processed;

    // Getters and Setters
}</code></pre><p></p><h4 id="2-save-data-and-event-in-one-transaction">2. Save Data and Event in One Transaction</h4>
<pre><code class="language-java">import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.time.LocalDateTime;
import java.util.UUID;

@Service
public class OrderService {

    ...

    @Transactional
    public void createOrder(Order order) throws JsonProcessingException {
        // Save the order
        orderRepository.save(order);

        // Create the event payload
        OrderCreatedEvent event = new OrderCreatedEvent(order.getId());
        String payload = objectMapper.writeValueAsString(event);

        // Save the event to the outbox
        OutboxMessage outboxMessage = new OutboxMessage();
        outboxMessage.setId(UUID.randomUUID());
        outboxMessage.setEventType(&quot;OrderCreated&quot;);
        outboxMessage.setPayload(payload);
        outboxMessage.setCreatedAt(LocalDateTime.now());
        outboxMessage.setProcessed(false);

        outboxMessageRepository.save(outboxMessage);
    }
}</code></pre><p></p><h4 id="3-background-worker-to-publish-events">3. Background Worker to Publish Events</h4>
<pre><code class="language-java">import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

import java.util.List;

@Component
public class OutboxPublisher {

    private final OutboxMessageRepository outboxMessageRepository;
    private final MessageBroker broker;

    public OutboxPublisher(OutboxMessageRepository outboxMessageRepository, MessageBroker broker) {
        this.outboxMessageRepository = outboxMessageRepository;
        this.broker = broker;
    }

    @Scheduled(fixedDelay = 1000) // Poll every second
    @Transactional
    public void publishEvents() {
        List&lt;OutboxMessage&gt; messages = outboxMessageRepository.findByProcessedFalse();

        for (OutboxMessage message : messages) {
            try {
                broker.publish(message.getEventType(), message.getPayload());
                message.setProcessed(true);
                outboxMessageRepository.save(message); // Mark as processed
            } catch (Exception ex) {
                // Leave the message unprocessed; the next poll will retry it.
                // Prefer a real logger over printStackTrace in production.
                ex.printStackTrace();
            }
        }
    }
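    // Scaling note (beyond this sketch): if several publisher instances poll the
    // same outbox table, fetch rows with &quot;SELECT ... FOR UPDATE SKIP LOCKED&quot; or use
    // a scheduler lock, so two workers never publish the same outbox row twice.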
}</code></pre><p></p><h4 id="4-repository-interfaces">4. Repository Interfaces</h4>
<pre><code class="language-java">import org.springframework.data.jpa.repository.JpaRepository;
import java.util.List;
import java.util.UUID;

public interface OutboxMessageRepository extends JpaRepository&lt;OutboxMessage, UUID&gt; {
    List&lt;OutboxMessage&gt; findByProcessedFalse();
}

public interface OrderRepository extends JpaRepository&lt;Order, UUID&gt; {
    // Custom query methods if needed
}</code></pre><p></p><h4 id="5-event-and-order-classes">5. Event and Order Classes</h4>
<pre><code class="language-java">import jakarta.persistence.*;
import java.util.UUID;

public class OrderCreatedEvent {
    private UUID orderId;

    public OrderCreatedEvent(UUID orderId) {
        this.orderId = orderId;
    }

    // Getters and Setters
    public UUID getOrderId() { return orderId; }
    public void setOrderId(UUID orderId) { this.orderId = orderId; }
}

@Entity
@Table(name = &quot;orders&quot;)
public class Order {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private UUID id;

    // Other fields
    private String productName;
    private int quantity;

    // Getters and Setters
}</code></pre><p></p><h4 id="6-message-broker-interface">6. Message Broker Interface</h4>
<pre><code class="language-java">public interface MessageBroker {
    void publish(String eventType, String payload);
}</code></pre><p></p><h4 id="key-take-aways-in-implementation">Key Takeaways in Implementation</h4>
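<p>One practical note first: the <code>MessageBroker</code> interface above is intentionally thin, so an in-memory implementation is enough for unit tests (or as a template for a real Kafka/RabbitMQ adapter). This is only a sketch; <code>InMemoryMessageBroker</code> is our name, not a library class:</p>

```java
import java.util.ArrayList;
import java.util.List;

// The interface from section 6, restated so this sketch is self-contained.
interface MessageBroker {
    void publish(String eventType, String payload);
}

// In-memory broker: records published events instead of sending them anywhere.
class InMemoryMessageBroker implements MessageBroker {
    private final List<String> published = new ArrayList<>();

    @Override
    public void publish(String eventType, String payload) {
        published.add(eventType + ":" + payload);
    }

    // What was "sent", in publish order; handy for test assertions.
    public List<String> published() {
        return List.copyOf(published);
    }
}
```

<p>Injecting this into the <code>OutboxPublisher</code> in a test lets you assert exactly which events left the outbox, without a running broker.</p>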
<ol><li><strong>Transactions</strong>: Use&#xA0;<code>@Transactional</code>&#xA0;to make the business write and the outbox write atomic.</li><li><strong>Polling</strong>: The&#xA0;<code>@Scheduled</code>&#xA0;annotation simplifies background processing. For production, consider&#xA0;<strong>Change Data Capture (CDC)</strong>&#xA0;tools like Debezium.</li><li><strong>Idempotency</strong>: Ensure your&#xA0;<code>MessageBroker</code>&#xA0;implementation handles duplicate events gracefully.</li></ol><h4 id="best-practices">Best Practices</h4>
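<p>The idempotency takeaway above can also be sketched on the consumer side. Below is a minimal dedupe guard for at-least-once delivery; in production the seen-id set would be a database table with a unique constraint, and every name here is hypothetical:</p>

```java
import java.util.HashSet;
import java.util.Set;

// Consumer-side dedupe: process each event id at most once.
class IdempotentHandler {
    private final Set<String> seenEventIds = new HashSet<>(); // prod: unique-keyed DB table

    // Returns true if the event was processed, false if it was a duplicate delivery.
    boolean handle(String eventId, Runnable businessLogic) {
        if (!seenEventIds.add(eventId)) {
            return false; // already handled: at-least-once brokers may redeliver
        }
        businessLogic.run();
        return true;
    }
}
```

<p>The same guard works whether the duplicate comes from the relay publishing a row twice or from the broker redelivering.</p>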
<ol><li><strong>Idempotency is King</strong>: Design your event handlers to handle duplicates gracefully.</li><li><strong>Order Matters</strong>: Use&#xA0;<code>createdAt</code>&#xA0;timestamps or sequence IDs to ensure events are processed in order.</li><li><strong>Monitor the Outbox</strong>: Alert if unprocessed events pile up; it could indicate a broker outage. If your broker is a queue, also add alerting on the dead-letter queue.</li><li><strong>Use JSON Schema</strong>: Validate event payloads to avoid malformed data.</li></ol><h4 id="debezium-alternatives">Debezium Alternatives</h4>
<ul><li><strong>AWS DMS (Database Migration Service): </strong>A managed service for real-time CDC and database replication. Best for AWS users who want a no-maintenance solution.</li><li><strong>Maxwell&#x2019;s Daemon</strong>: A lightweight CDC tool for MySQL. Great for simple, MySQL-only use cases.</li><li><strong>Kafka Connect JDBC Source Connector</strong>: Polls databases for changes and streams them to Kafka. Ideal for non-real-time, polling-based systems.</li><li><strong>Oracle GoldenGate</strong>: Enterprise-grade CDC tool for Oracle databases. Perfect for Oracle users needing high-performance replication.</li><li><strong>Bottled Water (for PostgreSQL)</strong>: A PostgreSQL-specific CDC tool that streams changes to Kafka; note the project is no longer maintained.</li></ul><h4 id="conclusion-consistency-wins">Conclusion: Consistency Wins!</h4>
<p>The Transactional Outbox Pattern is your ally in the quest for&#xA0;<strong>reliable distributed systems</strong>. By combining atomic database writes with asynchronous event propagation, you eliminate ghost events and sleep better at night. &#x1F634;</p><p></p>]]></content:encoded></item><item><title><![CDATA[Build your own Dependency Injection Framework]]></title><description><![CDATA[Dependency Injection is very useful for most projects and allows us to create our dependencies easily and makes the development process seamless]]></description><link>https://whackd.in/build-your-own-dependecy-injection-framework/</link><guid isPermaLink="false">654181d393f196d8458f254a</guid><category><![CDATA[java]]></category><category><![CDATA[dependency]]></category><category><![CDATA[injection]]></category><category><![CDATA[spring]]></category><category><![CDATA[welcome]]></category><dc:creator><![CDATA[Rohit Pal]]></dc:creator><pubDate>Wed, 01 Nov 2023 11:59:01 GMT</pubDate><media:content url="https://whackd.in/content/images/2023/11/Dependency-Injection--1-.png" medium="image"/><content:encoded><![CDATA[<img src="https://whackd.in/content/images/2023/11/Dependency-Injection--1-.png" alt="Build your own Dependency Injection Framework"><p><a href="https://en.wikipedia.org/wiki/Dependency_injection?ref=whackd.in#:~:text=Article%20Talk,opposed%20to%20creating%20them%20internally." rel="noreferrer">Dependency Injection</a> is very useful for most projects and allows us to create our dependencies easily and makes the development process seamless.</p><p>Join us, where we will be creating a Dependency Injection framework from scratch with the least amount of libraries possible. We would be exploring Java&apos;s Reflection and creating our dependency resolution strategy to resolve bean (object) resolution. 
</p><p>Hopefully this exploration of building a framework will help you understand the internals of reflection and how dependency injection works in the Spring Framework.</p><p>This small container can easily be used to add dependency injection to your own projects, it is embeddable in any project, and if you are writing a lean library, this journey would be worthwhile too.</p>]]></content:encoded></item><item><title><![CDATA[Custom Feign Client Builder library in Spring Boot]]></title><description><![CDATA[Creating a custom Spring module to make an HTTP client library using OpenFeign

]]></description><link>https://whackd.in/custom-feign-client-builder-in-spring-boot-gotchas/</link><guid isPermaLink="false">646f0bd799f4232de50b13b3</guid><category><![CDATA[spring]]></category><category><![CDATA[openfeign]]></category><category><![CDATA[feign]]></category><category><![CDATA[annotation]]></category><category><![CDATA[custom]]></category><category><![CDATA[builder]]></category><category><![CDATA[spring boot]]></category><category><![CDATA[retry]]></category><category><![CDATA[library]]></category><category><![CDATA[bean]]></category><category><![CDATA[register]]></category><category><![CDATA[programming]]></category><category><![CDATA[java]]></category><dc:creator><![CDATA[Rohit Pal]]></dc:creator><pubDate>Sat, 27 May 2023 13:54:52 GMT</pubDate><media:content url="https://whackd.in/content/images/2023/09/spring-boot-logo--1-.png" medium="image"/><content:encoded><![CDATA[<img src="https://whackd.in/content/images/2023/09/spring-boot-logo--1-.png" alt="Custom Feign Client Builder library in Spring Boot"><p></p><p>Feign is a declarative client which makes it easy to create an HTTP client by defining only Java Interfaces and endpoints as interface methods. Simple and straightforward easy-to-use client can be used as OpenFeign when you want to use it with Spring Boot.</p><p>But, there would be cases in your project where you might want to reuse the client in multiple services or modules such that you could maintain the REST service endpoints by versioning them via the release version of your library.</p><p>Feign provides an easy to either create a config-based initiation supporting auto-enabled configuration provided by Spring via <code>@EnableFeignClients</code> annotation. When using OpenFeign with Spring configuration can be done via the prefix:</p><!--kg-card-begin: markdown--><pre><code>feign.client.config.&lt;client-name&gt;.*
</code></pre>
<!--kg-card-end: markdown--><p>An example config <code>application.yaml</code> would look something like </p><!--kg-card-begin: html--><pre><code class="language-yaml">feign:
  client:
    config:
      feignName:
        connectTimeout: 5000
        readTimeout: 5000
        loggerLevel: full
        errorDecoder: com.example.SimpleErrorDecoder
        retryer: com.example.SimpleRetryer
        requestInterceptors:
          - com.example.FooRequestInterceptor
          - com.example.BarRequestInterceptor
        decode404: false
        encoder: com.example.SimpleEncoder
        decoder: com.example.SimpleDecoder
        contract: com.example.SimpleContract
</code>
</pre><!--kg-card-end: html--><p>But if you want more control, the Feign builder can be used to construct Feign client beans as well. To register multiple custom beans in Spring, we can use either <code><a href="https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/beans/factory/support/BeanDefinitionRegistryPostProcessor.html?ref=whackd.in">BeanDefinitionRegistryPostProcessor</a></code> or <code><a href="https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/context/annotation/ImportBeanDefinitionRegistrar.html?ref=whackd.in">ImportBeanDefinitionRegistrar</a></code>, implement the required methods, and annotate the class as <code>@Configuration</code>.</p><p>Both approaches hand you a <code>BeanDefinitionRegistry</code>, which lets you register a bean as:</p><!--kg-card-begin: html--><pre><code class="language-java">registry.registerBeanDefinition(&quot;&lt;clientname&gt;&quot;, createBeanDefination(feignClient));


private BeanDefinition createBeanDefination(Object client) {
    var definition = new RootBeanDefinition();
    definition.setBeanClass(client.getClass());
    definition.setInstanceSupplier(() -&gt; client);

    return definition;
}</code></pre><!--kg-card-end: html--><p>When using <strong><code>BeanDefinitionRegistryPostProcessor</code></strong>, your configuration class would look something like this:</p><!--kg-card-begin: html--><pre><code class="language-java">@Configuration
public class AppConfig implements BeanDefinitionRegistryPostProcessor {
 
    private BeanDefinition createBeanDefination(Object client) {
        var definition = new RootBeanDefinition();
        definition.setBeanClass(client.getClass());
        definition.setInstanceSupplier(() -&gt; client);
 
        return definition;
    }
 
    @Override
    public void postProcessBeanDefinitionRegistry(BeanDefinitionRegistry registry) throws BeansException {
        // init builder; target(...) binds it to a client interface and base url (illustrative values)
        var feignClient = Feign.builder().target(PetClient.class, &quot;http://localhost:8080&quot;);
 
        registry.registerBeanDefinition(&quot;&lt;clientname&gt;&quot;, createBeanDefination(feignClient));
    }

    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) throws BeansException {

    }
}</code></pre><!--kg-card-end: html--><p>The <strong><code>ImportBeanDefinitionRegistrar</code></strong> lets you read metadata from your custom enable annotation in Spring. So, for example, if you want to initiate your clients via a custom annotation and read its annotation metadata, we can do so like this:</p><!--kg-card-begin: html--><pre><code class="language-java">@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface EnableCustomFeignClients {
    Class&lt;?&gt;[] clients() default {};
}
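
// Example usage (hypothetical): the registrar below reads the attributes of this annotation
// @Configuration
// @EnableCustomFeignClients(clients = {PetClient.class})
// public class ClientsConfig { }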

@Override
public void registerBeanDefinitions(AnnotationMetadata annotationMetadata, BeanDefinitionRegistry registry) {
    Map&lt;String, Object&gt; attrs = annotationMetadata.getAnnotationAttributes(EnableCustomFeignClients.class.getName(),
            true);
    Class&lt;?&gt;[] clients = attrs == null ? null : (Class[])attrs.get(&quot;clients&quot;);

    // init specific client classes

    // init builder
}</code></pre><!--kg-card-end: html--><p>If you are writing a library, it&apos;s much better to let the configuration drive the builder. The configuration in your <code>application.yml</code>, under your custom prefix, should govern which clients to initiate. This way, users of the library configure only the clients they actually need, and only those get initiated with your custom initiation configuration.</p><p>You can write your custom configuration with your own prefix based on your needs.</p><!--kg-card-begin: html--><pre><code class="language-yaml">mycompany.client.name.configX=1000
...</code></pre><!--kg-card-end: html--><p><br><strong>Problem: Cannot inject your custom configuration properties.</strong></p><p>Since we are doing pre-initiation processing, <code>@ConfigurationProperties</code> does not work for us here. The way around this is to use <strong><a href="https://docs.spring.io/spring-boot/docs/current/api/org/springframework/boot/context/properties/bind/Binder.html?ref=whackd.in"><em>Binder</em></a></strong>, which can bind environment properties from the <code>application.yml</code> configuration, by prefix, to your custom DTOs.</p><p>Also, in order to get the <code>Environment</code> object, we can make our <code>@Configuration</code> class implement <strong><code>EnvironmentAware</code></strong>.</p><!--kg-card-begin: html--><pre><code class="language-java">@Configuration
public class AppConfig implements BeanDefinitionRegistryPostProcessor, EnvironmentAware {

    private Environment environment;
    private Map&lt;String, MyConfig&gt; config;

    @Override
    public void setEnvironment(Environment environment) {
        this.environment = environment;
    }

    @Override
    public void postProcessBeanDefinitionRegistry(BeanDefinitionRegistry registry) throws BeansException {
        // init binder
        this.config = Binder.get(this.environment)
                .bind(&quot;myconfig.web.client&quot;, Bindable.mapOf(String.class, MyConfig.class))
                .orElseThrow(IllegalStateException::new);

        // init builders
        ...
    }

}</code></pre><!--kg-card-end: html--><p>Now, let&apos;s talk about registering your <code>@FeignClient</code>-annotated classes in your custom registration configuration class.</p><p>To create the client, you can configure a few basic options, listed here.</p><!--kg-card-begin: html--><pre><code class="language-java">var feignClient = Feign.builder()
                .contract()
                .encoder()
                .decoder()
                .errorDecoder()
                .retryer()
                .target(&lt;class&gt;, &quot;url&quot;);

registry.registerBeanDefinition(&quot;clientName&quot;, createBeanDefination(feignClient));</code></pre><!--kg-card-end: html--><!--kg-card-begin: markdown--><ul>
<li>Contract</li>
<li>Encoder</li>
<li>Decoder</li>
<li>ErrorDecoder</li>
<li>Retryer</li>
<li>Target</li>
</ul>
<!--kg-card-end: markdown--><!--kg-card-begin: html--><br><!--kg-card-end: html--><h2 id="contract">Contract</h2><p>When using Spring, we can use the <code>SpringMvcContract</code> class, which allows Spring&apos;s own MVC web annotations like <code>@RequestMapping</code>, <code>@GetMapping</code>, <code>@PostMapping</code>, etc. on Feign client methods.</p><!--kg-card-begin: html--><pre><code class="language-java">.contract(new SpringMvcContract())</code></pre><!--kg-card-end: html--><p>However, you might end up in a situation where your client&apos;s endpoint URIs also need to be configurable. Spring already has placeholder resolution in place for this, but it does not work out of the box when creating a custom client.</p><!--kg-card-begin: html--><pre><code class="language-java">@FeignClient(name = &quot;pet&quot;)
interface PetClient {

    @GetMapping(value = &quot;${pet.getpets.url}&quot;)
    JsonNode getPets();
}</code></pre><!--kg-card-end: html--><p>In order to resolve a placeholder like <code>${pet.getpets.url}</code>, we have to set a <code>ResourceLoader</code> on our <code>SpringMvcContract</code>. We can do that by implementing the <code>ResourceLoaderAware</code> interface and then setting the <code>ResourceLoader</code> on our <code>SpringMvcContract</code>.&#xA0;</p><!--kg-card-begin: html--><pre><code class="language-java">@Configuration
public class AppConfig implements BeanDefinitionRegistryPostProcessor, ResourceLoaderAware {

    private ResourceLoader resourceLoader;

    @Override
    public void setResourceLoader(ResourceLoader resourceLoader) {
        this.resourceLoader = resourceLoader;
    }

    @Bean
    Contract springMvcContract() {
        var contract = new SpringMvcContract();
        contract.setResourceLoader(this.resourceLoader);

        return contract;
    }

    @Override
    public void postProcessBeanDefinitionRegistry(BeanDefinitionRegistry registry) throws BeansException {
        // init builder
        var feignClient = Feign.builder()
                .contract(springMvcContract())
                ...
    }
}</code></pre><!--kg-card-end: html--><h2 id="encoderdecoder">Encoder/Decoder</h2><p>Based on your client config, you can pick and set the required <code>Encoder</code> and <code>Decoder</code> for the client. For example, if your content type is <code>application/json</code>, we can register the encoder and decoder as:</p><!--kg-card-begin: html--><pre><code class="language-java">@Bean
Encoder feignEncoder() {
    var jsonMessageConverters = new MappingJackson2HttpMessageConverter(new ObjectMapper());
    return new SpringEncoder(() -&gt; new HttpMessageConverters(jsonMessageConverters));
}

@Bean
Decoder feignDecoder() {
    var jsonMessageConverters = new MappingJackson2HttpMessageConverter(new ObjectMapper());
    return new ResponseEntityDecoder(new SpringDecoder(() -&gt; new HttpMessageConverters(jsonMessageConverters)));
}

@Override
public void postProcessBeanDefinitionRegistry(BeanDefinitionRegistry registry) throws BeansException {
    // init builder
    var feignClient = Feign.builder()
            .contract(springMvcContract())
            .encoder(feignEncoder())
            .decoder(feignDecoder())
            ...
}</code></pre><!--kg-card-end: html--><h2 id="errordecoder">ErrorDecoder</h2><p><code>ErrorDecoder</code> is where we can map error responses to custom exceptions for a handler method, which can then be handled later in the code. However, when writing a library we might want the most generic way to handle exceptions. By default, Feign throws <code>FeignException</code> whenever the response code is not <code>2XX</code>.<br></p><p>Along with throwing exceptions, Feign also allows marking an exception as retryable via the <code>ErrorDecoder</code>. If the decoder returns a <code>RetryableException</code>, then instead of throwing it immediately, Feign first invokes the <em><strong>Retryer</strong></em>. A default implementation of the error decoder is also available and can be hooked directly into your custom client builder.</p><h2 id="retryer-with-errordecoder">Retryer with ErrorDecoder</h2><p>Feign provides an interface for cases where you want to retry. A simple custom <code>Retryer</code> would look like this:</p><!--kg-card-begin: html--><pre><code class="language-java">static class Retryer implements feign.Retryer {

    @Override
    public void continueOrPropagate(RetryableException e) {
        
    }

    @Override
    public feign.Retryer clone() {
        return null;
    }
}</code></pre><!--kg-card-end: html--><p>Every single method invocation is wrapped in an <code>InvocationHandler</code>, where the client&apos;s retry info is used to check whether a retryer is registered and whether we want to retry the handler again. The method <code>continueOrPropagate</code> lets us hook in logic if we want to continue retrying.</p><p>However, you might want to retry only in certain cases, or only on certain methods for specific configured error codes. I did not find any direct way to configure retry per method inside the Feign client.</p><p><strong>Problem: No feature to enable retry on a single client method based on configuration by default.</strong></p><p>In order to resolve that, I used an <code>ErrorDecoder</code> with a <strong>custom Retryer</strong> as follows:</p><p>First, implement a custom exception that extends Feign&apos;s <code>RetryableException</code>. Here, we want to capture the retry config for the method that raised the exception.</p><p>But before this, let us customize our configuration to allow per-method retry configuration. The property <code>retryMethod</code> holds this configuration per client and then per method. </p><!--kg-card-begin: html--><pre><code class="language-java">static class MyConfig {
    private String url;
    private Logger.Level loggerLevel;
    private Integer connectTimeout;
    private Integer readTimeout;
    private RetryConfig retry; // client level retryconfig
    private Map&lt;String, RetryConfig&gt; retryMethod; // methodname-key : retryconfig
}

static class RetryConfig {
    private Integer maxAttempts;
    private Long period;
    private Long maxPeriod;
    private List&lt;Integer&gt; retryCodes = List.of(500, 502, ...);
}</code></pre><!--kg-card-end: html--><p>Given the structure above, an example property (note the <code>retryMethod</code> segment) would look like:</p><!--kg-card-begin: markdown--><pre><code>myconfig.web.client.petClient.retryMethod.getPets.maxAttempts=4
</code></pre>
<!--kg-card-end: markdown--><p>Extending the <code>RetryableException</code> class:</p><!--kg-card-begin: html--><pre><code class="language-java">static class MyRetryException extends RetryableException {

    private final RetryConfig retryConfig;
    public MyRetryException(RetryConfig retryConfig, int status, Request.HttpMethod httpMethod, Request request) {
        super(status, &quot;&quot;, httpMethod, null, request);
        this.retryConfig = retryConfig;
    }
}</code></pre><!--kg-card-end: html--><p>Here, we have captured information about the retry config, which we expect from our custom <code>ErrorDecoder</code>, declared as:</p><!--kg-card-begin: html--><pre><code class="language-java">static class MyErrorDecoder implements ErrorDecoder {

    private final ErrorDecoder decoder = new ErrorDecoder.Default();
    private final RetryConfig clientRetryConfig;
    private final Map<string, retryconfig> methodRetryConfig;

    public MyErrorDecoder(RetryConfig clientRetryConfig, Map&lt;String, RetryConfig&gt; methodRetryConfig) {
        this.clientRetryConfig = clientRetryConfig;
        this.methodRetryConfig = methodRetryConfig;
    }

    @Override
    public Exception decode(String s, Response response) {
        // get method name
        var methodName = response.request().requestTemplate().methodMetadata().method().getName();

        // prefer method-level config; fall back to the client-level config
        RetryConfig retryConfig = this.methodRetryConfig != null
                ? this.methodRetryConfig.getOrDefault(methodName, this.clientRetryConfig)
                : this.clientRetryConfig;

        // check if retry status code
        var status = response.status();
        if (retryConfig != null &amp;&amp; retryConfig.retryCodes.contains(status)) {
            var httpMethod = response.request().httpMethod();
            return new MyRetryException(retryConfig, status, httpMethod, response.request());
        }

        return decoder.decode(s, response);
    }
}</code></pre><!--kg-card-end: html--><p>Now, based on our global or method-level retry config, we can throw our custom retryable exception, <code>MyRetryException</code>.</p><p>Customizing the <code>Retryer</code> is a little hacky. We check whether the thrown exception is an <code>instanceof</code> our custom <code>MyRetryException</code>; if so, we delegate to an internally created instance of the <strong>default Retryer</strong> provided by Feign. The default <code>Retryer</code> implementation uses <u>exponential backoff, which is the preferred approach in microservices</u>.</p><!--kg-card-begin: html--><pre><code class="language-java">static class MyRetryer implements feign.Retryer {

    private Retryer retryer;
    private Retryer initAndGet(RetryConfig retryConfig) {
        if (retryer == null) {
            retryer = new feign.Retryer.Default(
                    retryConfig.period,
                    retryConfig.maxPeriod,
                    retryConfig.maxAttempts
            );
        }

        return retryer;
    }

    @Override
    public void continueOrPropagate(RetryableException e) {
        if (e instanceof MyRetryException) {
            MyRetryException retryException = (MyRetryException) e;
            // delegate: the default retryer sleeps and returns to allow the retry,
            // or rethrows once maxAttempts is exhausted
            initAndGet(retryException.retryConfig).continueOrPropagate(e);
            return;
        }

        throw e;
    }

    @Override
    public feign.Retryer clone() {
        return new MyRetryer();
    }
}</code></pre><!--kg-card-end: html--><p><br>Now that all components of our custom Feign builder are ready, we can start registering the clients. We loop through the configured clients, initialize the builder for each, and register all candidate client beans.</p><!--kg-card-begin: html--><pre><code class="language-java">@Override
public void postProcessBeanDefinitionRegistry(BeanDefinitionRegistry registry) throws BeansException {
    // init binder
    this.config = Binder.get(this.environment)
            .bind(&quot;myconfig.web.client&quot;, Bindable.mapOf(String.class, MyConfig.class))
            .orElseThrow(IllegalStateException::new);

    // get classes annotated with @FeignClient
    var annotatedTypeScanner = new AnnotatedTypeScanner(FeignClient.class);
    var candidateClients = annotatedTypeScanner.findTypes(&quot;...base.package.lib.client&quot;);

    candidateClients.forEach(candidateClient -&gt; {
        FeignClient annotation = candidateClient.getAnnotation(FeignClient.class);
        if (config.containsKey(annotation.name())) { // check if client name matches config name
            var clientConfig = config.get(annotation.name());
            var feignClient = Feign.builder()
                    .contract(springMvcContract())
                    .encoder(feignEncoder())
                    .decoder(feignDecoder())
                    .options(new Request.Options(clientConfig.connectTimeout, clientConfig.readTimeout))
                    .errorDecoder(new MyErrorDecoder(clientConfig.retry, clientConfig.retryMethod))
                    .retryer(new MyRetryer())
                    .target(candidateClient, clientConfig.url);

            var clientName = String.format(&quot;%sClient&quot;, annotation.name());
            registry.registerBeanDefinition(clientName, createBeanDefination(feignClient));
        }
    });
}</code></pre><!--kg-card-end: html--><p>That is it. Now, once this setup is wrapped as a library, we can just use <code>@EnableCustomFeignClients</code> along with the relevant connection configuration in <code>application.yml</code> to use it in any Spring Boot application.</p>]]></content:encoded></item><item><title><![CDATA[Accessing S3 content using CloudFront Signed URL]]></title><description><![CDATA[<p>In this post, we will configure AWS CloudFront distribution to provide restricted access to S3 bucket private contents so that objects can only be accessed through CloudFront Signed URL.</p><p>A signed URL includes additional information such as expiration date, that provide user applications to have better control over access to</p>]]></description><link>https://whackd.in/accessing-s3-content-using-cloudfront-signed-url/</link><guid isPermaLink="false">630876e77e780403f24812c2</guid><category><![CDATA[AWS]]></category><category><![CDATA[S3]]></category><category><![CDATA[CloudFront]]></category><category><![CDATA[Cloud]]></category><dc:creator><![CDATA[Prabhat Agarwal]]></dc:creator><pubDate>Fri, 26 Aug 2022 11:08:40 GMT</pubDate><media:content url="https://whackd.in/content/images/2023/09/Untitled-design.png" medium="image"/><content:encoded><![CDATA[<img src="https://whackd.in/content/images/2023/09/Untitled-design.png" alt="Accessing S3 content using CloudFront Signed URL"><p>In this post, we will configure an AWS CloudFront distribution to provide restricted access to private S3 bucket contents, so that objects can only be accessed through a CloudFront Signed URL.</p><p>A signed URL includes additional information, such as an expiration date, that gives applications better control over access to the content.</p><h3 id="prerequisite">Prerequisite</h3><ol><li>AWS account with console access.</li><li>Any Java IDE to generate the signed URL code</li><li>OpenSSL utility to generate the public/private key pair</li></ol><p>Now let&#x2019;s start with the steps for the 
workflow.</p><h3 id="create-s3-bucket">Create S3 Bucket</h3><p>Log in to the AWS console and search for the S3 service, or go to https://s3.console.aws.amazon.com</p><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2022/08/S3-console.png" class="kg-image" alt="Accessing S3 content using CloudFront Signed URL" loading="lazy" width="752" height="236" srcset="https://whackd.in/content/images/size/w600/2022/08/S3-console.png 600w, https://whackd.in/content/images/2022/08/S3-console.png 752w" sizes="(min-width: 720px) 720px"></figure><p>Click on S3 to open the S3 console, then follow the <strong>Create bucket</strong> button, which lands on the create bucket form.</p><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2022/08/s3_console_2.png" class="kg-image" alt="Accessing S3 content using CloudFront Signed URL" loading="lazy" width="602" height="238" srcset="https://whackd.in/content/images/size/w600/2022/08/s3_console_2.png 600w, https://whackd.in/content/images/2022/08/s3_console_2.png 602w"></figure><p>Choose a globally unique and valid name for the bucket, and choose an AWS region in which the logged-in user has permission to create buckets. Make sure the &#x201C;<strong>Block <em>all</em> public access</strong>&#x201D; checkbox is checked. 
Leave the rest of the settings at their defaults and click on Create bucket.</p><h3 id="create-a-key-pair">Create a key pair</h3><p>In the following steps, OpenSSL is used to create a key pair that will form a trusted key group for CloudFront.</p><p>There are other tools as well to create public/private key pairs.</p><ul><li>Use the following command to generate an RSA key pair and save it in a file named <strong>private_key.pem</strong>.</li></ul><pre><code>openssl genrsa -out private_key.pem 2048</code></pre><ul><li>The following command will extract the public key from the generated file and save it in <strong>public_key.pem</strong>.</li></ul><pre><code>openssl rsa -pubout -in private_key.pem -out public_key.pem</code></pre><ul><li>Later, this post will use Java to generate signed URLs, so the PEM private key file cannot be used directly; a PEM to DER conversion is required. Use the following command to do so.</li></ul><pre><code>openssl pkcs8 -topk8 -nocrypt -in private_key.pem -inform PEM -out private_key.der -outform DER</code></pre><p>Keep the generated files aside, as they will be required later when creating the CloudFront distribution and when signing URLs.</p><h3 id="creating-the-signers-in-cloudfront">Creating the signers in CloudFront</h3><p>Use the following steps to create a signer in CloudFront:</p><ul><li>Open the CloudFront console and, from the left hamburger menu, navigate to Public keys under the Key management section.</li></ul><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2022/08/key_management-83.png" class="kg-image" alt="Accessing S3 content using CloudFront Signed URL" loading="lazy" width="329" height="117"></figure><ul><li>Go to Public keys and create one by providing an appropriate name and pasting the contents of the <strong><em>public_key.pem</em> </strong>file in the Key section.</li><li>In the same key management section, go to key groups and create a key group with the public key created in the previous 
step.</li></ul><p>Now let&#x2019;s move on and create the CloudFront distribution.</p><h3 id="create-cloudfront-distribution">Create CloudFront Distribution</h3><p>Search for CloudFront in the AWS console.</p><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2022/08/cloud_front_console.png" class="kg-image" alt="Accessing S3 content using CloudFront Signed URL" loading="lazy" width="752" height="213" srcset="https://whackd.in/content/images/size/w600/2022/08/cloud_front_console.png 600w, https://whackd.in/content/images/2022/08/cloud_front_console.png 752w" sizes="(min-width: 720px) 720px"></figure><p>Open the CloudFront console and click on &#x201C;<strong>Create a CloudFront distribution</strong>&#x201D;, which opens the create distribution form.</p><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2022/08/cloud_front_form_1.png" class="kg-image" alt="Accessing S3 content using CloudFront Signed URL" loading="lazy" width="602" height="166" srcset="https://whackd.in/content/images/size/w600/2022/08/cloud_front_form_1.png 600w, https://whackd.in/content/images/2022/08/cloud_front_form_1.png 602w"></figure><p>From the origin dropdown, select the bucket created in the previous step as the Origin domain.</p><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2022/08/cloud_front_form_2.png" class="kg-image" alt="Accessing S3 content using CloudFront Signed URL" loading="lazy" width="602" height="306" srcset="https://whackd.in/content/images/size/w600/2022/08/cloud_front_form_2.png 600w, https://whackd.in/content/images/2022/08/cloud_front_form_2.png 602w"></figure><p>For <strong><em>Origin access</em></strong>, choose from the options given:</p><p><strong>1. Origin access control settings</strong> (recommended), where you need to update the bucket policy to allow access to the IAM service principal provided when the distribution is created, <strong><em>or</em></strong></p><p><strong>2. 
Legacy access identities</strong>, where an option to update the bucket policy is provided. This will update the S3 bucket policy so that the bucket is accessible through the CloudFront distribution.</p><p>Choose the recommended one, create a control setting, and use it.</p><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2022/08/cloud_front_form_3.png" class="kg-image" alt="Accessing S3 content using CloudFront Signed URL" loading="lazy" width="602" height="331" srcset="https://whackd.in/content/images/size/w600/2022/08/cloud_front_form_3.png 600w, https://whackd.in/content/images/2022/08/cloud_front_form_3.png 602w"></figure><p>Choose <strong><em>Allowed HTTP methods</em></strong> as required.</p><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2022/08/cloud_front_form_4.png" class="kg-image" alt="Accessing S3 content using CloudFront Signed URL" loading="lazy" width="602" height="201" srcset="https://whackd.in/content/images/size/w600/2022/08/cloud_front_form_4.png 600w, https://whackd.in/content/images/2022/08/cloud_front_form_4.png 602w"></figure><p>In Restrict viewer access, select Yes and keep the Trusted authorization type as Trusted key groups. 
In the Add key groups section, select the key group created previously from the dropdown.</p><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2022/08/cloud_front_form_5.png" class="kg-image" alt="Accessing S3 content using CloudFront Signed URL" loading="lazy" width="602" height="256" srcset="https://whackd.in/content/images/size/w600/2022/08/cloud_front_form_5.png 600w, https://whackd.in/content/images/2022/08/cloud_front_form_5.png 602w"></figure><p>This will bind the key pair created in the previous section, so that a CloudFront Signed URL is required to access S3 bucket content associated with the distribution.</p><p>In Cache and Origin Request, select <strong><em>CachingDisabled</em></strong> so that any update to an S3 object is immediately reflected when fetching it.</p><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2022/08/cloud_front_form_.6.png" class="kg-image" alt="Accessing S3 content using CloudFront Signed URL" loading="lazy" width="602" height="209" srcset="https://whackd.in/content/images/size/w600/2022/08/cloud_front_form_.6.png 600w, https://whackd.in/content/images/2022/08/cloud_front_form_.6.png 602w"></figure><p>Now create the CloudFront distribution, keeping the other options at their defaults.</p><p>On success, the distribution detail page will open; from the top of the page, copy the policy statement.</p><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2022/08/cloud_front_success.png" class="kg-image" alt="Accessing S3 content using CloudFront Signed URL" loading="lazy" width="602" height="132" srcset="https://whackd.in/content/images/size/w600/2022/08/cloud_front_success.png 600w, https://whackd.in/content/images/2022/08/cloud_front_success.png 602w"></figure><h3 id="update-access-policy-in-s3-bucket">Update access policy in S3 bucket</h3><p>Go to the S3 console (https://s3.console.aws.amazon.com) and select the same bucket used in the CloudFront distribution as 
origin.</p><p>Move to the bucket policy section under the Permissions tab, edit the bucket policy, and paste the copied policy contents.</p><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2022/08/bucket_policy_1-1.png" class="kg-image" alt="Accessing S3 content using CloudFront Signed URL" loading="lazy" width="602" height="432" srcset="https://whackd.in/content/images/size/w600/2022/08/bucket_policy_1-1.png 600w, https://whackd.in/content/images/2022/08/bucket_policy_1-1.png 602w"></figure><p>For this policy, the allowed action is restricted to reading objects, which can be modified as per requirement. Let&#x2019;s update the action and add permission to write objects as well. To do so, replace the action line with the following.</p><blockquote>&quot;Action&quot;: [&quot;s3:GetObject&quot;,&quot;s3:PutObject&quot;],</blockquote><p>The resulting statement will look like the one mentioned below.</p><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2022/08/bucket_policy_2.png" class="kg-image" alt="Accessing S3 content using CloudFront Signed URL" loading="lazy" width="602" height="334" srcset="https://whackd.in/content/images/size/w600/2022/08/bucket_policy_2.png 600w, https://whackd.in/content/images/2022/08/bucket_policy_2.png 602w"></figure><p>Save the changes.</p><p>With this, the configuration part is completed; now let&#x2019;s move on to creating the CloudFront signed URL.</p><h3 id="generating-cloudfront-signed-url">Generating CloudFront Signed URL</h3><p>The steps and code to generate a CloudFront signed URL are explained below.</p><p>Create a Spring Boot Gradle project and add the below dependency to it.</p><blockquote>implementation &apos;com.amazonaws:aws-java-sdk-cloudfront:1.12.283&apos;</blockquote><p>Here is the code for generating the CloudFront signed URL.</p><pre><code class="language-java">import java.io.File;
import java.io.IOException;
import java.security.spec.InvalidKeySpecException;
import java.util.Date;

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

import com.amazonaws.services.cloudfront.CloudFrontUrlSigner;
import com.amazonaws.services.cloudfront.util.SignerUtils;

@SpringBootApplication
public class CloudfrontsignedurlApplication implements CommandLineRunner {

	public static void main(String[] args) {
		SpringApplication.run(CloudfrontsignedurlApplication.class, args);
	}

	@Override
	public void run(String... args) throws Exception {
		String cloudFrontKeyPairId = &quot;&lt;public_key_id&gt;&quot;; // public key id created in cloud front key management section.
		String distributionDomain = &quot;&lt;cloud_front_distribution_name&gt;&quot;; // cloud front distribution domain name.
		String key = &quot;cloudfrontsignedurl/objetc1.txt&quot;; // S3 bucket object path

		Date expirationDate = new Date(System.currentTimeMillis() + 7200000); // URL will be valid for 2 hours (7200000 ms)
		try {
			File cloudFrontPrivateKeyFile = generateCloudFrontPrivateKeyFile();
			String signedUrl = CloudFrontUrlSigner.getSignedURLWithCannedPolicy(SignerUtils.Protocol.https,
					distributionDomain, cloudFrontPrivateKeyFile, key, cloudFrontKeyPairId, expirationDate);
			System.out.println(signedUrl);

		} catch (IOException | InvalidKeySpecException exception) {
			throw new Exception(exception.getMessage());
		}
	}

	private File generateCloudFrontPrivateKeyFile() {
		File file = new File(&quot;&lt;private_key_file&gt;&quot;); // Path to the private key file in DER format.
		return file;
	}
}
</code></pre><p>The generated signed URL will look as shown below.</p><blockquote><a href="#">https://d26doxj2i1y97q.cloudfront.net/cloudfrontsignedurl/objetc1.txt?Expires=1661499384&amp;Signature=LabCbO27wL1-ErAEwYU9CBGh1pVdmRY2oQ94QfQP9cGi4vQTNgo7xT3ctbr6lAolcH5AZEe-I79s~spEA6VCnRUIstsvDhLoN4spJHrQxlecapxKK7P0J9U6kXL8V2ucDgwrJmFfdFWpipeGkgTVgKJ~s53Unp76YrTJODYnX-ZZc3RuQ4go5oBYhXU2hRHKlVusV3llhlOyfN58FxytzAZkegECRj6LR6m0WzRI-guCPqHCO7Gir~Ls5ewCw-TZpAyMca-LjKAeTGd~KS4etFgr5Tbt3UrGiDXJVoDkCcc-Z1cu9xu34s9ZQHqSD-t7AHdCszv6sitRcFK9PSJQyg__&amp;Key-Pair-Id=K1HEUAHR3KG1E3</a></blockquote><p>This URL can be used to upload and download the object to the S3 bucket.</p><p>Note: one also needs to set the <strong>Access Key ID</strong> and <strong>Secret Access Key</strong> for programmatic access.</p><!--kg-card-begin: markdown--><p>Complete code for generating the signed URL can be found at <a href="https://github.com/aprabhat/cloudfrontsignedurl?ref=whackd.in" target="_blank">Github</a>.</p>
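For reference, the URL structure the SDK builds can be sketched with only the JDK. A canned policy is a fixed JSON document over the resource URL and expiry, signed with SHA1-with-RSA, then base64-encoded with `+`, `=`, `/` swapped for the query-string-safe `-`, `_`, `~` (this mirrors the documented CloudFront canned-policy format; `CannedPolicySketch` and its method names are illustrative, not part of the AWS SDK):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.Signature;
import java.util.Base64;

public class CannedPolicySketch {

    // Canned policy: only the resource URL and the expiry (epoch seconds) vary.
    static String cannedPolicy(String resourceUrl, long expiresEpochSeconds) {
        return "{\"Statement\":[{\"Resource\":\"" + resourceUrl
                + "\",\"Condition\":{\"DateLessThan\":{\"AWS:EpochTime\":"
                + expiresEpochSeconds + "}}}]}";
    }

    // CloudFront replaces '+', '=', '/' with '-', '_', '~' so the base64
    // signature survives inside a query string.
    static String urlSafeBase64(byte[] bytes) {
        return Base64.getEncoder().encodeToString(bytes)
                .replace('+', '-').replace('=', '_').replace('/', '~');
    }

    static String signedUrl(String resourceUrl, long expires, String keyPairId,
                            PrivateKey privateKey) throws Exception {
        Signature signer = Signature.getInstance("SHA1withRSA");
        signer.initSign(privateKey);
        signer.update(cannedPolicy(resourceUrl, expires).getBytes());
        return resourceUrl + "?Expires=" + expires
                + "&Signature=" + urlSafeBase64(signer.sign())
                + "&Key-Pair-Id=" + keyPairId;
    }

    public static void main(String[] args) throws Exception {
        // An in-memory key pair stands in for the OpenSSL-generated one.
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair keyPair = generator.generateKeyPair();
        System.out.println(signedUrl("https://example.cloudfront.net/object1.txt",
                System.currentTimeMillis() / 1000 + 7200, "KEYPAIRID", keyPair.getPrivate()));
    }
}
```

The printed URL has the same `Expires`/`Signature`/`Key-Pair-Id` shape as the SDK output above; in practice, use `CloudFrontUrlSigner` rather than hand-rolling the signing.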
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Schedulers and the Need for Concurrency]]></title><description><![CDATA[To get best computational efficiency from a multicore processor core kernels of operating systems uses Kernel-Level threads to provide concurrency]]></description><link>https://whackd.in/need-for-concurrency-language-schedulers/</link><guid isPermaLink="false">62e191ecbf4e0b1029dee434</guid><category><![CDATA[java]]></category><category><![CDATA[golang]]></category><category><![CDATA[schedulers]]></category><category><![CDATA[rxjava]]></category><category><![CDATA[threads]]></category><category><![CDATA[concurrency]]></category><category><![CDATA[kotlin]]></category><category><![CDATA[goroutine]]></category><category><![CDATA[programming]]></category><dc:creator><![CDATA[Rohit Pal]]></dc:creator><pubDate>Thu, 28 Jul 2022 12:02:36 GMT</pubDate><media:content url="https://whackd.in/content/images/2023/05/Two-bullet-trai-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://whackd.in/content/images/2023/05/Two-bullet-trai-1.png" alt="Schedulers and the Need for Concurrency"><p>To get the best computational efficiency from a multicore processor, operating system kernels use <strong>Kernel-Level</strong> threads to provide <em>concurrency</em>. However, to get the most out of these threads, the OS also offers <strong>User-Level</strong> threads, which programming languages can use to achieve multithreading while leaving control with users to handle them.</p><p>The need for computational efficiency has been increasing, and so has the need for concurrency. 
But before discussing the need for concurrency, let&apos;s first discuss the difference between <em><code>concurrency</code></em> and <em><code>parallelism</code></em>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://whackd.in/content/images/2022/07/image-2.png" class="kg-image" alt="Schedulers and the Need for Concurrency" loading="lazy" width="554" height="270"><figcaption>concurrency vs parallelism</figcaption></figure><p>When we talk about <em>concurrency</em>, we mean handling multiple tasks together; when we talk about parallelism, it&apos;s about executing those multiple tasks at the same time. The OS&apos;s kernel threads are not up to the developer to handle: they are managed directly by the operating system, which also handles task scheduling on these threads.</p><p>Programming languages have allowed developers to use <em>user level threads</em>, which can be managed by the program. A <strong><em>thread</em></strong> needs to be spawned, and then it can <em>run</em>, <em>wait</em> or <em>suspend</em>, all of which can be explicitly controlled by the developer through the way it is programmed. This allows task <code>preemption</code> on <strong>threads</strong>, with tasks being <em>loaded</em> into or <em>unloaded</em> from the execution state.</p><p>The OS&apos;s thread <em>preemption</em> is a good way to execute and distribute your tasks over the processor; however, <strong>preemption between threads is costly.</strong> This <em>context switching</em> between the threads adds latency to the execution. Programming languages have been evolving to overcome this. 
In general, we see two paradigms in current major programming languages to handle this.</p><p>The <strong>Async-Await</strong> <code>async-await</code> (<em>event loop</em>) way of achieving better preemption is used in</p><ul><li>Javascript</li><li>Rust</li><li>C++</li><li>C#</li></ul><p><strong>Lightweight Threads</strong> are used in</p><ul><li>Go</li><li>Java</li></ul><p>In particular, Java&apos;s <em>reactive programming libraries</em> try to emulate an event loop, but they are still based on threads. Let&apos;s discuss the different styles in which programming languages have evolved the efficiency of <em>preemption</em> in the race for the <strong>Need for Concurrency</strong>.</p><!--kg-card-begin: markdown--><h2 id="async-await-scheduling">Async-Await Scheduling</h2>
<!--kg-card-end: markdown--><p>In this style of concurrent task handling, the code is structured so that an operation can be suspended mid-flow and awaited using an await statement. Basically, the developer splits their code into different functions which can be executed concurrently. The internal framework of the programming language puts these structured functions into an event queue. On this event queue, scheduling is done such that tasks get executed one by one. Whenever an <code>async</code> operation is encountered, the task is pushed back onto this queue.</p><p>Generally, a good split of code is decided by the operations which are blocking calls: <em>I/O</em>, <em>network calls</em> or <em>file operations</em>. All these operations are blocking in nature, and the OS system calls required to perform them block the current thread.</p><p>This way, the flow of execution does not have to wait for the next statement to execute; whenever the blocking operation completes, the asynchronous nature allows the scheduled task to return to the execution state with its context.</p><p>So the CPU thread does not need to get blocked, and it&apos;s the programming language that makes this possible by preempting the tasks conditionally. The executing thread never goes into a wait state, and there is no need for preemption by the OS.</p><!--kg-card-begin: markdown--><h2 id="lightweight-threads">Lightweight Threads</h2>
<!--kg-card-end: markdown--><p>These threads are a <a href="https://openjdk.org/jeps/425?ref=whackd.in">newer thread implementation</a> of executors within the programming language framework, which uses underlying threads to achieve efficient concurrency. In some languages, they are also called <em>Virtual threads</em>.</p><p>Here, this lightweight thread construct handles the execution of tasks on top of the OS-provided threads. These custom scheduling implementations follow their own algorithms to pick and distribute tasks across the existing threads.</p><!--kg-card-begin: markdown--><h3 id="golangs-scheduler">Golang&apos;s Scheduler</h3>
<!--kg-card-end: markdown--><p>Golang can run parallel runnable tasks on lightweight threads called <em><strong>Goroutines</strong></em>: executable runnables are defined as functions launched with the special keyword <em><strong>go</strong></em>, and their execution is handled by golang&apos;s internal scheduler.</p><p>Golang spawns a fixed number of threads based on the <em><strong><code>GOMAXPROCS</code></strong></em> variable, or by default the current <em>number of processors</em>. It maintains a <em>local run queue and a global run queue of Goroutines</em>. Any new runnable Goroutine is added to one of the threads&apos; local run queues, and that&apos;s how it gets scheduled for execution.</p><p>However, that&apos;s not all. Whenever there is a blocking operation inside a <em>Goroutine</em>, the scheduler will <em>preempt</em> this execution and allow the next <em>Goroutine</em> to execute, avoiding blocking the underlying thread for efficiency.</p><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2022/07/Untitled-design-1.jpg" class="kg-image" alt="Schedulers and the Need for Concurrency" loading="lazy" width="1200" height="768" srcset="https://whackd.in/content/images/size/w600/2022/07/Untitled-design-1.jpg 600w, https://whackd.in/content/images/size/w1000/2022/07/Untitled-design-1.jpg 1000w, https://whackd.in/content/images/2022/07/Untitled-design-1.jpg 1200w" sizes="(min-width: 720px) 720px"></figure><p>The suspended Goroutine can be queued to any of the available process objects&apos; local run queues. This style of switching, where the same Goroutine can be executed across multiple threads, is also called <strong><em>Cooperative Scheduling</em></strong>.</p><p>Another notable aspect of golang&apos;s scheduler is the <strong>Work-Stealing</strong> nature of its virtual threads. Whenever a process is free of any queued Goroutines, it can steal queued routines from another <em>local</em> or the <em>global queue</em>. 
It can also steal from the <em>Network Poller</em> in a predefined way. This way, task balancing is done and none of the threads gets overloaded by its Goroutine run queue.</p><!--kg-card-begin: markdown--><h3 id="kotlins-coroutines">Kotlin&apos;s Coroutines</h3>
<!--kg-card-end: markdown--><p>Kotlin&apos;s coroutines use continuation steps, in which each step is a structured execution fragment that can be preempted. Here, the lightweight routines are basically finite state machines built from these defined steps. Continuation steps are defined by the developer and are in turn implicitly handled by Kotlin&apos;s compiler. So whenever a blocking step is encountered, the continuation is suspended and the next step can be queued for execution; once the suspended step completes its blocking operation, it can be queued back for execution.</p><!--kg-card-begin: markdown--><h3 id="javas-reactive-rxjava">Java&apos;s Reactive (RxJava)</h3>
<!--kg-card-end: markdown--><p>RxJava has various implementations of Schedulers like <code>parallel</code>, <code>elastic</code>, <code>single</code>, <code>boundedElastic</code> and <code>immediate</code>. These schedulers basically run the <code>async</code> task on an ExecutorService thread pool, each implemented to handle different use cases. For instance, <code>parallel</code> uses an <strong>ExecutorService</strong> <em>fixed thread pool</em> and is recommended for computational tasks. But if <em>devs</em> do not choose these schedulers wisely, it can result in <strong><em>backpressure</em></strong> issues<em>.</em></p><p>The library provides methods like <code>subscribeOn</code>, <code>publishOn</code> and <code>runOn</code> for signalling: we <em>publish</em> (<em>run</em>) the task separately on a different thread, and when the blocking operation completes, we can listen for the result on the <em>subscribed</em> thread.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://whackd.in/content/images/2022/07/rx-java.png" class="kg-image" alt="Schedulers and the Need for Concurrency" loading="lazy" width="1200" height="768" srcset="https://whackd.in/content/images/size/w600/2022/07/rx-java.png 600w, https://whackd.in/content/images/size/w1000/2022/07/rx-java.png 1000w, https://whackd.in/content/images/2022/07/rx-java.png 1200w" sizes="(min-width: 720px) 720px"><figcaption>RxJava Task can be published on one type and subscribed on another</figcaption></figure><p>There is also a learning curve in writing clean code. 
Error logging in <em>RxJava</em> produces huge <em>stacktraces</em>, making it difficult to debug and narrow down business logic errors.</p><p>Again, with <strong>Java 19&apos;s Virtual Threads</strong>, <em>performance</em> can be made much more efficient, since they provide better <em>preemption</em> and <em>concurrency</em>.</p><!--kg-card-begin: markdown--><h3 id="javas-virtual-threads">Java&apos;s Virtual Threads</h3>
<!--kg-card-end: markdown--><p>Originally, Java&apos;s green threads were user-level threads that could be handled by users (<em>devs</em>) to simulate kernel-level multithreading. <code>Future</code>, <code>CompletableFuture</code> and Thread <code>Executors</code> thread pools are already available in the Java concurrency packages to perform multithreading operations. With <strong><em>Java 19</em></strong>, these <em>classic</em> threads are now designated as <strong><em>platform threads</em></strong>, while a new kind of lightweight thread called <strong><em>Virtual Threads</em></strong> is introduced.</p><p>These lightweight threads use an existing thread pool to execute tasks. A dedicated <code>ForkJoinPool</code> in <em>FIFO</em> mode is used as the <strong><em>Virtual Thread Scheduler</em></strong>, which saves the unloaded (unmounted) thread&apos;s execution stack in the heap so that execution can continue from the point of suspension.</p><div class="kg-card kg-callout-card kg-callout-card-green"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text">A Thread to handle a thread ?</div></div><p></p><p>In case the runnable code of a Virtual Thread is blocking and there seems to be no way to reschedule it, <code>Virtual Threads</code> will <strong><em>park</em></strong> (hold) the execution state, and at this point the underlying thread is also blocked. 
The execution stack of the thread would be saved be saved on JVM&apos;s heap.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://whackd.in/content/images/2022/07/JavaVirtualThreads.drawio.png" class="kg-image" alt="Schedulers and the Need for Concurrency" loading="lazy" width="738" height="413" srcset="https://whackd.in/content/images/size/w600/2022/07/JavaVirtualThreads.drawio.png 600w, https://whackd.in/content/images/2022/07/JavaVirtualThreads.drawio.png 738w" sizes="(min-width: 720px) 720px"><figcaption>Context switching on a thread from designated ForkJoinPool</figcaption></figure><p>Otherwise, the <em>Virtual Threads</em> would <em>yield</em> whenever a <strong><em>blocking operation</em></strong> is encountered and to enable this many base core <em>Java Classes</em> have been modified. &#xA0; It&apos;s really good to see that attempt been made to keep up the way of coding similar to the way it has been for older classical <em>platform</em> threads.</p><p>With great power comes great responsibility. Languages work their best for abstraction but using these new features also require learning about these schedulers well too.</p>]]></content:encoded></item><item><title><![CDATA[Automate Kubernetes deployment using Argo CD]]></title><description><![CDATA[<p>Argo CD is a declarative Git-Ops continuous delivery tool created for Kubernetes.</p><p><strong>k8s</strong> application manifests should be version controlled in a git repository. 
<strong><a href="https://argoproj.github.io/argo-cd/?ref=whackd.in">Argo CD </a></strong>uses the git repository as a source of truth which represents the desired state of the application.</p><p><strong>Argo CD</strong> is implemented as a <strong>k8s</strong> controller</p>]]></description><link>https://whackd.in/automate-k8s-deployment-using-argocd/</link><guid isPermaLink="false">624866e20c979a0e7a1019d3</guid><category><![CDATA[Devops]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[k8s]]></category><category><![CDATA[ArgoCD]]></category><category><![CDATA[CICD]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Prabhat Agarwal]]></dc:creator><pubDate>Tue, 21 Sep 2021 15:53:34 GMT</pubDate><media:content url="https://whackd.in/content/images/2021/09/image-14.png" medium="image"/><content:encoded><![CDATA[<img src="https://whackd.in/content/images/2021/09/image-14.png" alt="Automate Kubernetes deployment using Argo CD"><p>Argo CD is a declarative Git-Ops continuous delivery tool created for Kubernetes.</p><p><strong>k8s</strong> application manifests should be version controlled in a git repository.
<strong><a href="https://argoproj.github.io/argo-cd/?ref=whackd.in">Argo CD </a></strong>uses the git repository as a source of truth which represents the desired state of the application.</p><p><strong>Argo CD</strong> is implemented as a <strong>k8s</strong> controller which continuously compares the current (live) state with the desired state described in the git repository and automates the deployment of the desired state in the <strong>k8s</strong> environment.</p><h3 id="prerequisite">Prerequisite</h3><p>Before moving forward, some tools are required to complete the exercise:</p><ul><li>A local Kubernetes cluster, for example <a href="https://www.docker.com/products/docker-desktop?ref=whackd.in">Docker Desktop</a> or <a href="https://minikube.sigs.k8s.io/docs/start/?ref=whackd.in">Minikube</a></li><li><a href="https://kubernetes.io/docs/tasks/tools/?ref=whackd.in#kubectl">kubectl</a></li><li>Git account and git CLI</li><li>Argo CD</li></ul><p>The installation instructions for a local Kubernetes cluster can be found at the respective links.</p><h3 id="installing-argo-cd">Installing Argo CD</h3><p>As mentioned before, <strong>Argo CD</strong> is implemented as a custom controller, so it needs to be deployed in <strong>k8s</strong>.</p><p>Let&apos;s create a separate namespace for the <strong>Argo CD</strong> installation.</p><pre><code class="language-shell">kubectl create ns argocd</code></pre><!--kg-card-begin: html--><div class="protip">
    <h5 class><span>
            <svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-bulb" width="44" height="44" viewbox="0 0 24 24" stroke-width="1.5" stroke="#2c3e50" fill="none" stroke-linecap="round" stroke-linejoin="round" style="
    width: 30px;
    fill: #ffec00;
">
                <path stroke="none" d="M0 0h24v24H0z" fill="none"/>
                <path d="M3 12h1m8 -9v1m8 8h1m-15.4 -6.4l.7 .7m12.1 -.7l-.7 .7"/>
                <path d="M9 16a5 5 0 1 1 6 0a3.5 3.5 0 0 0 -1 3a2 2 0 0 1 -4 0a3.5 3.5 0 0 0 -1 -3"/>
                <line x1="9.7" y1="17" x2="14.3" y2="17"/>
            </svg>
        </span>
        Protip
    </h5>
    <p>To switch into a namespace, if you want to work within it for long, you can use:
        <code>kubectl config set-context --current --namespace=argocd</code>
    </p>
    <p>This official command is a little long to remember and run every time you want to switch to a new namespace.
        To avoid this, you can grab a tool called <code>kubens</code>, which is part of the kubectx package for macOS and
        Linux. Since I am using Windows, I need to install it separately using,
        <code>choco install kubens --version=0.9.1</code>
    </p>
    <p>After installing kubens, you can list all namespaces in the current context by running kubens, and print the
        current namespace with <code>kubens -c</code>.</p>
    <p>To switch to the argocd namespace, we can now use
        <code>kubens argocd</code>, which is much easier to remember and handier.
    </p>
</div><!--kg-card-end: html--><p>Coming back to our main section, let&apos;s now install <strong>Argo CD</strong> in the newly created namespace.</p><pre><code class="language-shell">kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml</code></pre><p>This takes around <em>3-5 minutes</em>; you can check the status of the deployment using <code>kubectl get deployment</code>, which will give output as follows:</p><pre><code class="language-shell">NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
argocd-dex-server    1/1     1            1           5m28s
argocd-redis         1/1     1            1           5m28s
argocd-repo-server   1/1     1            1           5m28s
argocd-server        1/1     1            1           5m28s</code></pre><h3 id="access-argo-cd-api-server">Access Argo CD API Server</h3><p>By default, the Argo CD API server is not exposed with an external IP. To access the API server, choose one of the following techniques to expose the <strong>Argo CD API</strong> server:</p><!--kg-card-begin: markdown--><ol>
<li><code>kubectl patch svc argocd-server -n argocd -p &apos;{&quot;spec&quot;: {&quot;type&quot;: &quot;LoadBalancer&quot;}}&apos;</code></li>
<li><code>kubectl port-forward svc/argocd-server -n argocd 8081:443</code></li>
</ol>
<!--kg-card-end: markdown--><p>Use <code>localhost:8081</code> to access the API server in the browser.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://whackd.in/content/images/2021/09/image-4.png" class="kg-image" alt="Automate Kubernetes deployment using Argo CD" loading="lazy" width="1918" height="903" srcset="https://whackd.in/content/images/size/w600/2021/09/image-4.png 600w, https://whackd.in/content/images/size/w1000/2021/09/image-4.png 1000w, https://whackd.in/content/images/size/w1600/2021/09/image-4.png 1600w, https://whackd.in/content/images/2021/09/image-4.png 1918w" sizes="(min-width: 720px) 720px"><figcaption>ArgoCD Login Page</figcaption></figure><h3 id="login-to-argo-cd">Login to Argo CD</h3><p>The default username to log in to the Argo CD server is <code>admin</code>; there are several ways to get the password.</p><p><strong>One</strong> way is to get the initial admin password using the command below. The password is <code>base64</code> encoded, so on Windows use Git Bash to run the decode command.</p><pre><code class="language-shell">kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath=&quot;{.data.password}&quot; | base64 -d</code></pre><p>This will return the decoded password, which can be used directly.</p><p><strong>Another</strong> way is to patch the <code>argocd-secret</code> and update the <code>admin.password</code> field, which should be hashed with the Bcrypt password-hashing function.
One can use an online tool like <a href="https://bcrypt-generator.com/?ref=whackd.in"><strong>bcrypt-generator.com</strong></a> to get the hash.</p><p>Here the updated password is <code>admin</code>.</p><pre><code>kubectl -n argocd patch secret argocd-secret -p &apos;{&quot;stringData&quot;: {&quot;admin.password&quot;: &quot;$2a$10$aDulNEmKSuPr8rUH7CvMguvkz/x5wRJuiZgXOw4cc4Zzk2RhpRpBi&quot;, &quot;admin.passwordMtime&quot;: &quot;&apos;$(date +%FT%T)&apos;&quot;}}&apos;</code></pre><p>After login, the landing page will show the list of applications. As there are no applications yet, this section is blank for now.</p><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2021/09/image-5.png" class="kg-image" alt="Automate Kubernetes deployment using Argo CD" loading="lazy" width="1920" height="903" srcset="https://whackd.in/content/images/size/w600/2021/09/image-5.png 600w, https://whackd.in/content/images/size/w1000/2021/09/image-5.png 1000w, https://whackd.in/content/images/size/w1600/2021/09/image-5.png 1600w, https://whackd.in/content/images/2021/09/image-5.png 1920w" sizes="(min-width: 720px) 720px"></figure><h3 id="login-using-argo-cd-cli">Login using Argo CD CLI</h3><p>First, install the <strong>Argo CD CLI</strong> as per your operating system.</p><p>For macOS or Linux, use the command below to install it.</p><pre><code class="language-shell">brew install argocd</code></pre><p>For Windows, follow the <a href="https://github.com/argoproj/argo-cd/releases/tag/v2.1.2?ref=whackd.in"><strong>link</strong></a>, download the binary and add its entry to the path variable.</p><p>To log in through the <strong>CLI</strong>, use the command mentioned below:</p><pre><code class="language-shell">argocd login localhost:8081 --username admin --password admin --insecure</code></pre><p>The Argo CD installation and login part is completed.
Now the next step is to create a demo application and update the k8s manifests to deploy it in the cluster.</p><h3 id="the-demo-application">The Demo Application</h3><p>As already discussed, <strong>Argo CD</strong> uses a git repository to automate the deployment, so a git repository is required. Here I am using a GitHub repository.</p><pre><code class="language-shell">git clone https://github.com/aprabhat/argo-cd-color-app.git</code></pre><p>After cloning, switch to the <code>dev</code> branch, check the deploy folder and take a look at the deployment and service manifest files.</p><p>Now create another namespace in which the demo application will be deployed.</p><pre><code class="language-shell">kubectl create ns practice</code></pre><p>Create a <strong>k8s</strong> resource of type <strong>Argo CD</strong> Application using the above repo as the source. For this, create a <em>color-app.yaml</em> file with the following content. Here, <code>targetRevision</code> represents the branch name and <code>path</code> represents the location from the root directory where the <strong>k8s</strong> manifests reside.</p><pre><code class="language-yaml">apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: color-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/aprabhat/argo-cd-color-app.git
    targetRevision: dev
    path: deploy
  destination:
    server: https://kubernetes.default.svc
    namespace: practice</code></pre><p>Save and apply this yaml file.</p><pre><code>kubectl apply -f color-app.yaml</code></pre><p>This will create a resource of type Application. The new application is now available in the <strong>Argo CD</strong> dashboard.</p><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2021/09/image-8.png" class="kg-image" alt="Automate Kubernetes deployment using Argo CD" loading="lazy" width="1920" height="904" srcset="https://whackd.in/content/images/size/w600/2021/09/image-8.png 600w, https://whackd.in/content/images/size/w1000/2021/09/image-8.png 1000w, https://whackd.in/content/images/size/w1600/2021/09/image-8.png 1600w, https://whackd.in/content/images/2021/09/image-8.png 1920w" sizes="(min-width: 720px) 720px"></figure><p>The app list and status can also be fetched with the Argo CD CLI using the <code>argocd app list</code> command.</p><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2021/09/image-9.png" class="kg-image" alt="Automate Kubernetes deployment using Argo CD" loading="lazy" width="1898" height="82" srcset="https://whackd.in/content/images/size/w600/2021/09/image-9.png 600w, https://whackd.in/content/images/size/w1000/2021/09/image-9.png 1000w, https://whackd.in/content/images/size/w1600/2021/09/image-9.png 1600w, https://whackd.in/content/images/2021/09/image-9.png 1898w" sizes="(min-width: 720px) 720px"></figure><p>Click on the app tile in the UI and check the resources created for the application.</p><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2021/09/image-10.png" class="kg-image" alt="Automate Kubernetes deployment using Argo CD" loading="lazy" width="1920" height="904" srcset="https://whackd.in/content/images/size/w600/2021/09/image-10.png 600w, https://whackd.in/content/images/size/w1000/2021/09/image-10.png 1000w, https://whackd.in/content/images/size/w1600/2021/09/image-10.png 1600w,
https://whackd.in/content/images/2021/09/image-10.png 1920w" sizes="(min-width: 720px) 720px"></figure><p>The app is created but not deployed, as the <code>syncPolicy</code> is not set and defaults to manual. So a manual synchronization is required.</p><pre><code>argocd app sync color-app</code></pre><p>Once the sync is completed, the application details UI will also be updated with all the resources created for the k8s application.</p><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2021/09/image-11.png" class="kg-image" alt="Automate Kubernetes deployment using Argo CD" loading="lazy" width="1578" height="640" srcset="https://whackd.in/content/images/size/w600/2021/09/image-11.png 600w, https://whackd.in/content/images/size/w1000/2021/09/image-11.png 1000w, https://whackd.in/content/images/2021/09/image-11.png 1578w" sizes="(min-width: 720px) 720px"></figure><p>Now let&apos;s update the application to auto-sync mode. To do so, update the color-app.yaml file&apos;s <code>syncPolicy</code> and apply it again using <code>kubectl apply -f color-app.yaml</code>.</p><pre><code class="language-yaml">apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: color-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/aprabhat/argo-cd-color-app.git
    targetRevision: dev
    path: deploy
  destination:
    server: https://kubernetes.default.svc
    namespace: practice
  syncPolicy:
    automated: {}</code></pre><p>To test the auto sync, update the <a href="https://github.com/aprabhat/argo-cd-color-app/blob/dev/deploy/color-deployment.yaml?ref=whackd.in">deployment.yaml</a> file&apos;s replicas and push the change to the git repository. Check the application details in the <strong>Argo CD UI</strong> dashboard to see whether the pods scaled to 2.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://whackd.in/content/images/2021/09/image-12.png" class="kg-image" alt="Automate Kubernetes deployment using Argo CD" loading="lazy" width="1580" height="634" srcset="https://whackd.in/content/images/size/w600/2021/09/image-12.png 600w, https://whackd.in/content/images/size/w1000/2021/09/image-12.png 1000w, https://whackd.in/content/images/2021/09/image-12.png 1580w" sizes="(min-width: 720px) 720px"><figcaption>Argo CD Dashboard</figcaption></figure><p>This time, as <code>syncPolicy</code> is set to automated, there is no need to do a manual sync.</p><p>Finally, you can check the deployed application. First, describe the service in the practice namespace using <code>kubectl -n practice describe svc color-service</code></p><pre><code>Name:                     color-service
Namespace:                practice
Labels:                   app.kubernetes.io/instance=color-app
Annotations:              &lt;none&gt;
Selector:                 app=color
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.110.20.25
IPs:                      10.110.20.25
LoadBalancer Ingress:     localhost
Port:                     &lt;unset&gt;  3000/TCP
TargetPort:               3000/TCP
NodePort:                 &lt;unset&gt;  30007/TCP
Endpoints:                10.1.0.19:3000,10.1.0.20:3000
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   &lt;none&gt;</code></pre><p>To access the deployed sample application, use localhost:&lt;NodePort&gt;. In this case it is localhost:30007; open this in the browser. The app&apos;s UI will appear in the browser window.</p><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2021/09/image-16.png" class="kg-image" alt="Automate Kubernetes deployment using Argo CD" loading="lazy" width="1920" height="1030" srcset="https://whackd.in/content/images/size/w600/2021/09/image-16.png 600w, https://whackd.in/content/images/size/w1000/2021/09/image-16.png 1000w, https://whackd.in/content/images/size/w1600/2021/09/image-16.png 1600w, https://whackd.in/content/images/2021/09/image-16.png 1920w" sizes="(min-width: 720px) 720px"></figure><p>In a few simple steps, you are able to automate the deployment of a k8s application using Argo CD. </p><p>Although this post does not cover all the scenarios or production use cases, it is a good place to get a glimpse of how Argo CD works and to understand how to automate k8s application deployment with it.</p><h3 id="conclusion">Conclusion</h3><p>In the era of microservices, as the number of <strong>k8s</strong> workloads keeps increasing, deploying tens or hundreds of <em>pods</em> at the same time is a tedious task.</p><p>In this situation, <strong>Argo CD</strong> is a great tool which enables teams to automate deployment across multiple environments (testing, staging, production).
<strong>Argo CD</strong> will definitely help scrum teams save time by automating the Continuous Delivery process and reduce common errors.</p>]]></content:encoded></item><item><title><![CDATA[Code Generation using Annotation Processor in Java]]></title><description><![CDATA[Generating REST Client from Swagger Documentation]]></description><link>https://whackd.in/code-generation-java-annotation-processor-swagger/</link><guid isPermaLink="false">624866e20c979a0e7a1019c6</guid><category><![CDATA[programming]]></category><category><![CDATA[java]]></category><category><![CDATA[swagger]]></category><category><![CDATA[lombok]]></category><category><![CDATA[spring]]></category><category><![CDATA[micronaut]]></category><category><![CDATA[rest]]></category><category><![CDATA[microservices]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Rohit Pal]]></dc:creator><pubDate>Sat, 04 Sep 2021 11:04:34 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1655720840699-67e72c0909d1?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE1fHxqYXZhJTIwYWl8ZW58MHx8fHwxNjg1MzQ2NzkzfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1655720840699-67e72c0909d1?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDE1fHxqYXZhJTIwYWl8ZW58MHx8fHwxNjg1MzQ2NzkzfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Code Generation using Annotation Processor in Java"><p>Java annotation processing (<a href="https://blogs.oracle.com/darcy/jdk-6-build-101-jsr-269-api-changes?ref=whackd.in">JSR 269</a>) is a standardized API that allows plugging into the Java Compiler to validate code and generate source or byte code.</p><p>This provides a way to do compile-time code generation for Java projects.
Dynamically annotating a class and enhancing or introducing new behavior is a key feature we have seen in frameworks like <a href="https://spring.io/?ref=whackd.in">Spring</a>. There, the behavior modification is the creation of a <a href="https://en.wikipedia.org/wiki/Proxy_pattern?ref=whackd.in">Proxy</a> around a certain <code>Class</code>, providing additional features underneath through <a href="https://www.oracle.com/technical-resources/articles/java/javareflection.html?ref=whackd.in">Reflection</a>; this all happens at runtime in Java.</p><p>This feature is already used by a lot of existing Java libraries, like <a href="https://projectlombok.org/?ref=whackd.in"><strong>Lombok</strong></a>, <a href="https://mapstruct.org/?ref=whackd.in"><strong>Mapstruct</strong></a> and <a href="https://immutables.github.io/?ref=whackd.in"><strong>Immutables</strong></a>. The JVM-based <a href="https://micronaut.io/?ref=whackd.in"><strong>Micronaut Framework</strong></a> has also been written from scratch to avoid reflection and instead use compile-time generated code, which improves runtime performance.</p><p><strong>Micronaut</strong> introduced a reflection-free approach to <em>Dependency Injection</em> and <em>AOP</em>. The framework&apos;s key feature is the usage of <strong>Annotation Processing</strong>, which can generate all new functionality at compile time instead of runtime (as in the case of Spring), resulting in faster startup times.</p><p><strong>Lombok</strong> goes beyond <a href="https://blogs.oracle.com/darcy/jdk-6-build-101-jsr-269-api-changes?ref=whackd.in">JSR 269</a> and adds additional code to modify the internal compiler&apos;s <code>AST</code>, while <a href="https://blogs.oracle.com/darcy/jdk-6-build-101-jsr-269-api-changes?ref=whackd.in">JSR 269</a> was only meant to generate new source code; this is alleged to break without warning when updating to a new compiler version.
</p><p>When working with microservices, there are times you need to quickly integrate a <strong>REST</strong> service into your code. The Java ecosystem is quite rich in HTTP libraries like <strong>Apache HttpClient</strong>, <strong>OkHttp</strong> etc., and one of my favourites, the declarative client <strong>OpenFeign</strong>; here we explore the possibility of auto-generating a client.</p><p>Let&apos;s use the code generation feature and auto-generate a REST Client Adaptor directly from publicly hosted <strong>Swagger Documentation</strong> - <a href="https://petstore.swagger.io/?ref=whackd.in"><strong>The PetStore</strong></a><strong> </strong>&#x1F415;.</p><h3 id="basic-setup">Basic Setup</h3><p>To enable compile-time Annotation Processing we first need to enable the Annotation Processing configuration in your IDE, which tells the compiler to run the Annotation Processor. If you are using IntelliJ IDEA you can follow this <a href="https://www.jetbrains.com/help/idea/annotation-processors-support.html?ref=whackd.in">link</a>.</p><p>We set up a basic gradle project with 2 separate modules:</p><ul>
<li><em><strong>library</strong></em> - The Annotation Processor lib</li>
<li><em><strong>playground</strong></em> - To test the annotation or your target project of usage</li>
</ul>
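One detail worth keeping in mind for the rest of the setup: the custom annotation we define next uses <code>RetentionPolicy.SOURCE</code>, so it is discarded by javac and is invisible to runtime reflection — which is exactly why all the work has to happen inside the compiler. Here is a small self-contained sketch of the difference (the annotation names are made up for illustration, not part of the project):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class RetentionDemo {
    // Discarded by javac after compilation, like a SOURCE-retained annotation.
    @Retention(RetentionPolicy.SOURCE)
    @interface SourceOnly { }

    // Kept in the class file and visible to reflection at runtime.
    @Retention(RetentionPolicy.RUNTIME)
    @interface KeptAtRuntime { }

    @SourceOnly
    @KeptAtRuntime
    static class Marked { }

    public static void main(String[] args) {
        // Only the RUNTIME-retained annotation survives into the class file.
        System.out.println("visible annotations: " + Marked.class.getAnnotations().length);
        System.out.println(Marked.class.getAnnotations()[0].annotationType().getSimpleName());
    }
}
```

A SOURCE-retained annotation can therefore only be acted upon by an annotation processor during compilation, never by a runtime framework.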
<h3 id="library-module">Library Module</h3><p>In this module we create a <code>build.gradle</code> with the following dependencies</p><pre><code class="language-gradle">dependencies {
    compile &apos;io.swagger.parser.v3:swagger-parser:2.0.27&apos;
    compile &apos;com.squareup:javapoet:1.13.0&apos;
    compile &apos;com.fasterxml.jackson.core:jackson-core:2.12.5&apos;
    compile &apos;org.apache.commons:commons-collections4:4.4&apos;
    compile &apos;org.apache.commons:commons-text:1.9&apos;
    testCompile group: &apos;junit&apos;, name: &apos;junit&apos;, version: &apos;4.12&apos;
}</code></pre><p>Create a Custom Annotation <code>SwaggerClient</code> which, when applied to a class, triggers processing during compilation and in turn leads to the generation of new code. </p><pre><code class="language-java">@Retention(RetentionPolicy.SOURCE)
@Target(ElementType.TYPE)
public @interface SwaggerClient {
    String location();
}</code></pre><p>Enable the Annotation Processor via a <code>META-INF</code> configuration in this submodule. </p><p>To do this, create a file in the directory <code>resources/META-INF/services/</code> named <code>javax.annotation.processing.Processor</code> and add an entry for your Annotation Processor class as a <strong>FQCN</strong> (fully qualified class name). We would name our annotation processor class <code>SwaggerClientProcessor</code>. The file can list one Annotation Processor per line. </p><p>For us, the file&apos;s content would be</p><pre><code>com.whackd.library.SwaggerClientProcessor
</code></pre><p>Now, create the <code>SwaggerClientProcessor</code> class. This processor class has to extend <code>javax.annotation.processing.AbstractProcessor</code></p><pre><code class="language-java">@SupportedAnnotationTypes(&quot;com.whackd.library.SwaggerClient&quot;)
@SupportedSourceVersion(SourceVersion.RELEASE_11)
public class SwaggerClientProcessor extends AbstractProcessor {
    private Messager messager;
    private Filer filer;
    private Elements elements;
    private Map&lt;String, Element&gt; markedClasses;
    ...
}</code></pre><p>We would have to implement 2 methods: <strong><em>init</em></strong> and <strong><em>process</em></strong></p><pre><code class="language-java">@Override
public synchronized void init(ProcessingEnvironment pEnv) {
    super.init(pEnv);
    filer = pEnv.getFiler();
    messager = pEnv.getMessager();
    elements = pEnv.getElementUtils();
    markedClasses = new HashMap&lt;&gt;();
}</code></pre><p>In the <strong><em>process </em></strong>method, we start processing all classes that are marked with the <code>@SwaggerClient</code> annotation.</p><pre><code class="language-java">@Override
public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
    for (Element element : roundEnv.getElementsAnnotatedWith(SwaggerClient.class)) {
        if (element.getKind() != ElementKind.CLASS) {
            messager.printMessage(Diagnostic.Kind.ERROR, &quot;Can be applied to class.&quot;);
            return true;
        }

        TypeElement typeElement = (TypeElement) element;
        markedClasses.put(typeElement.getSimpleName().toString(), element);
    } 
    ...
}</code></pre>
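To see these round-based mechanics end-to-end without the full Gradle setup, the whole pipeline can be driven from a single file: write a toy processor to disk, compile it, and then invoke javac on a second source with the <code>-processor</code> flag — the same thing Gradle's <code>annotationProcessor</code> configuration ultimately does. This is a self-contained sketch, not the real <code>SwaggerClientProcessor</code>; the class names and generated file are illustrative:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class AptDemo {
    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("apt-demo");

        // A toy processor: for every root class it sees, it generates a
        // <Name>Client companion class, mimicking the SwaggerClientProcessor idea.
        String toy = """
                import java.io.*;
                import java.util.Set;
                import javax.annotation.processing.*;
                import javax.lang.model.SourceVersion;
                import javax.lang.model.element.Element;
                import javax.lang.model.element.TypeElement;

                public class ToyProcessor extends AbstractProcessor {
                    @Override public Set<String> getSupportedAnnotationTypes() { return Set.of("*"); }
                    @Override public SourceVersion getSupportedSourceVersion() { return SourceVersion.latest(); }

                    @Override
                    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment env) {
                        for (Element e : env.getRootElements()) {
                            String name = e.getSimpleName().toString();
                            if (name.endsWith("Client")) continue; // skip what we generated ourselves
                            try (Writer w = processingEnv.getFiler().createSourceFile(name + "Client").openWriter()) {
                                w.write("public class " + name + "Client { }");
                            } catch (IOException ex) {
                                throw new UncheckedIOException(ex);
                            }
                        }
                        return false;
                    }
                }
                """;
        Path toySrc = dir.resolve("ToyProcessor.java");
        Files.writeString(toySrc, toy);
        Path petSrc = dir.resolve("Pet.java");
        Files.writeString(petSrc, "public class Pet { }");

        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        // Step 1: compile the processor itself.
        javac.run(null, null, null, "-d", dir.toString(), toySrc.toString());
        // Step 2: compile Pet.java with the processor attached.
        int rc = javac.run(null, null, null,
                "-processorpath", dir.toString(),
                "-processor", "ToyProcessor",
                "-d", dir.toString(),
                petSrc.toString());

        System.out.println("javac exit code: " + rc);
        System.out.println("PetClient compiled: " + Files.exists(dir.resolve("PetClient.class")));
    }
}
```

Note that the generated <code>PetClient</code> source is picked up in a subsequent processing round and compiled like any hand-written class, which is why the processor guards against reprocessing its own output.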
<p>Get the target JSON URL from the Custom Annotation&apos;s <code>location</code> property, and start parsing the <strong>Swagger</strong> <code>JSON</code>:</p><pre><code class="language-java">Swagger swagger;
try {
    final String location = element.getAnnotation(SwaggerClient.class).location();
    swagger = new Swagger20Parser().read(location, null);
} catch (IOException e) {
    messager.printMessage(Diagnostic.Kind.ERROR, &quot;Error fetching Swagger API Metadata.&quot;);
    return true;
}</code></pre>
<p>Now, using the <strong>JavaPoet</strong> library, we add code to generate the source Java class by iterating over the values from the Swagger parser. Here&apos;s a small snippet of the code.</p><pre><code class="language-java">final Map&lt;String, Model&gt; definitions = swagger.getDefinitions();
for (Map.Entry&lt;String, Path&gt; pathEntry : swagger.getPaths().entrySet()) {
    String path = pathEntry.getKey();
    Path pathInfo = pathEntry.getValue();
    final List&lt;String&gt; paramKeys = parsePathParams(path);

    for (Map.Entry&lt;HttpMethod, Operation&gt; opsEntry : pathInfo.getOperationMap().entrySet()) {
        HttpMethod httpMethod = opsEntry.getKey();
        Operation operation = opsEntry.getValue();
        List&lt;Parameter&gt; headerParameters = new ArrayList&lt;&gt;();
        List&lt;Parameter&gt; queryParameters = new ArrayList&lt;&gt;();

        MethodSpec.Builder methodBuilder = MethodSpec
                .methodBuilder(operation.getOperationId())
                .addException(Exception.class)
                .addModifiers(Modifier.PUBLIC, Modifier.STATIC);
    ...</code></pre>
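The <code>parsePathParams</code> helper referenced in that loop is not shown in the post; a plausible implementation (my own sketch — the actual helper may differ) simply extracts the <code>{placeholder}</code> keys from a Swagger path template with a regular expression:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PathParams {
    // Matches {petId}-style placeholders in a swagger path template.
    private static final Pattern PARAM = Pattern.compile("\\{([^/}]+)}");

    // Hypothetical body for the parsePathParams helper used in the snippet above.
    static List<String> parsePathParams(String path) {
        List<String> keys = new ArrayList<>();
        Matcher m = PARAM.matcher(path);
        while (m.find()) {
            keys.add(m.group(1)); // capture the name without the braces
        }
        return keys;
    }

    public static void main(String[] args) {
        System.out.println(parsePathParams("/pet/{petId}/uploadImage"));
    }
}
```

These keys later become the generated method's parameters, substituted into the URL template at call time.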
<br><h3 id="playground-module">Playground Module</h3><p>In this module, we simply use the <code>Main</code> class and import our <code>library</code> submodule.</p><pre><code class="language-gradle">dependencies {
    implementation project(&apos;:library&apos;)
    annotationProcessor project(&apos;:library&apos;)
    testCompile group: &apos;junit&apos;, name: &apos;junit&apos;, version: &apos;4.12&apos;
}</code></pre>
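Before running the showdown, it may help to picture what a generated <code>MainClient</code> method roughly has to produce at runtime. Below is a hand-written approximation of one PetStore endpoint using the JDK's built-in <code>java.net.http</code> types — purely illustrative: the method name, URL and HTTP machinery of the real generated code are assumptions, not taken from the project.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class GeneratedShape {
    // Hand-written approximation of what the processor might emit for
    // GET /pet/{petId} of the PetStore swagger (illustrative only).
    public static HttpRequest getPetById(long petId) {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://petstore.swagger.io/v2/pet/" + petId))
                .header("Accept", "application/json")
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = getPetById(42);
        // Only builds the request; no network call is made here.
        System.out.println(request.method() + " " + request.uri());
    }
}
```

The code generator's job is essentially to stamp out one such method per Swagger operation, with the path parameters spliced into the URL.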
<br><h3 id="showdown">Showdown</h3><p>Let&apos;s start testing the integration of our Custom Annotation <code>@SwaggerClient</code>. We simply mark our project&apos;s <code>Main</code> class with the annotation, giving it the <code>Swagger</code> Documentation HTTP URL. After adding the annotation, remember to recompile the <code>Main</code> class if auto-generation does not happen; if your compile target has been set to Gradle, recompile the <code>Main</code> class file manually. Our implementation generates a new class <code>MainClient</code> which has methods corresponding to all the endpoints of the Swagger Documentation.</p><blockquote>The Annotation Processor does not run on the same JVM as the source on which the annotation is placed. Instead it runs separately in a different JVM, which in turn uses javac with the <code>-processor</code> flag to compile all the classes. Gradle helps with this compilation using the <code>annotationProcessor</code> <strong>DependencyHandler.</strong></blockquote><pre><code>annotationProcessor project(&apos;:library&apos;)</code></pre><p>After recompilation, we check the <code>MainClient</code> class in the generated-sources build directory. It got compiled and added to our build&apos;s generated sources annotation directory. Cool ! &#x1F44D;</p><pre><code>playground/build/generated/sources/annotationProcessor/java/main/com/whackd/playground/MainClient.java</code></pre><p>Now the <em>Magic. </em>We have autocomplete available while coding! This is the exact feature we wanted to achieve.
&#x1F44F;&#x1F3FD;</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://whackd.in/content/images/2021/09/image-2.png" class="kg-image" alt="Code Generation using Annotation Processor in Java" loading="lazy" width="1506" height="794" srcset="https://whackd.in/content/images/size/w600/2021/09/image-2.png 600w, https://whackd.in/content/images/size/w1000/2021/09/image-2.png 1000w, https://whackd.in/content/images/2021/09/image-2.png 1506w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Generated class with methods as endpoints</span></figcaption></figure><p><strong>Execute</strong> it now already &#x1F680;</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://whackd.in/content/images/2021/09/image-3.png" class="kg-image" alt="Code Generation using Annotation Processor in Java" loading="lazy" width="1442" height="860" srcset="https://whackd.in/content/images/size/w600/2021/09/image-3.png 600w, https://whackd.in/content/images/size/w1000/2021/09/image-3.png 1000w, https://whackd.in/content/images/2021/09/image-3.png 1442w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Logs list of all available pets</span></figcaption></figure><p><strong>Concluding</strong> this post, code generation is a beautiful idea to automate and speed up your development process. It is easier to integrate and makes your teams adhere to a single coding-style strategy, since it is compile-time validated.
</p><p>It also cuts down the time spent on code reviews, boilerplate code for new services, and code-style changes every time a new integration is required.</p>]]></content:encoded></item><item><title><![CDATA[Demystifying JSON Web Token (JWT) Part-1]]></title><description><![CDATA[<p>JSON Web Token, or <strong>JWT</strong>, sometimes pronounced &apos;jot&apos;, is an open standard (<a href="https://tools.ietf.org/html/rfc7519?ref=whackd.in">RFC-7519</a>) for transferring claims between two parties as a JSON object, in a compact, printable and secure manner, along with a signature that proves their authenticity. </p><p>JWTs can be signed using JSON Web Signature</p>]]></description><link>https://whackd.in/demystifying-jwts/</link><guid isPermaLink="false">624866e20c979a0e7a1019c9</guid><category><![CDATA[programming]]></category><category><![CDATA[Authorization]]></category><category><![CDATA[java]]></category><dc:creator><![CDATA[Prabhat Agarwal]]></dc:creator><pubDate>Mon, 23 Aug 2021 04:42:55 GMT</pubDate><media:content url="https://whackd.in/content/images/2023/09/jwt_img.png" medium="image"/><content:encoded><![CDATA[<img src="https://whackd.in/content/images/2023/09/jwt_img.png" alt="Demystifying JSON Web Token (JWT) Part-1"><p>JSON Web Token, or <strong>JWT</strong>, sometimes pronounced &apos;jot&apos;, is an open standard (<a href="https://tools.ietf.org/html/rfc7519?ref=whackd.in">RFC-7519</a>) for transferring claims between two parties as a JSON object, in a compact, printable and secure manner, along with a signature that proves their authenticity. 
</p><p>JWTs can be signed using JSON Web Signature (<a href="https://tools.ietf.org/html/rfc7515?ref=whackd.in">RFC-7515</a>) and/or encrypted using JSON Web Encryption (<a href="https://tools.ietf.org/html/rfc7516?ref=whackd.in">RFC-7516</a>), which makes them a powerful and secure solution for transferring information in many different situations.</p><p>In this part we will focus on unencrypted JWTs, and cover encrypted ones in the next part.</p><h2 id="structure-of-jwt">Structure of JWT</h2><p>A JWT contains three components separated by dots (.):</p><ul><li>Header</li><li>Payload</li><li>Signature/Encryption Data</li></ul><p>The header and payload are mandatory and have a defined JSON structure. The third part (not a JSON object itself), the signature, depends upon the algorithm used for signing and can be omitted in the case of unsecured/unencrypted JWTs.</p><p>JWT uses URL-safe Base64 encoding, where &apos;+&apos; and &apos;/&apos; are substituted by &apos;-&apos; and &apos;_&apos; respectively. The resulting sequence is a string of the form <code>header.payload.signature</code> and looks like the following,</p><blockquote>eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c</blockquote><p>In this example, the base64url decoded header is</p><pre><code>{
  &quot;alg&quot;: &quot;HS256&quot;,
  &quot;typ&quot;: &quot;JWT&quot;
}</code></pre><p>the decoded payload is</p><pre><code>{
  &quot;sub&quot;: &quot;1234567890&quot;,
  &quot;name&quot;: &quot;John Doe&quot;,
  &quot;iat&quot;: 1516239022
}</code></pre><p>and the last part is the signature, computed over the first two parts using a secret key.</p><p>Now let&apos;s look at the details of the mandatory parts of a JWT: the <em>header</em> and the <em>payload.</em></p><h2 id="header">Header</h2><p>The header, also known as the JOSE (<strong>J</strong>SON <strong>O</strong>bject <strong>S</strong>igning and <strong>E</strong>ncryption) header, is the first part of a JWT. The fields present depend upon the type of JWT; for example, the only mandatory field for an unencrypted JWT is <em>alg </em>with the value none. For signed JWTs, fields can include <em>alg, jku, kid, typ.</em></p><p>The header part is a JSON object and has the following format</p><pre><code>{
  &quot;alg&quot;: &quot;RS256&quot;, // none in case of unencrypted jwt
  &quot;jku&quot;: &quot;url to the public key set&quot;,
  &quot;kid&quot;: &quot;key-id-1&quot;, //optional
  &quot;typ&quot;: &quot;JWT&quot;
}</code></pre><p>It is possible to add additional user-defined claims to the header.</p><h2 id="payload">Payload</h2><p>Just like the header, the payload is a JSON object. This is the part where all the user-related data is added. It can contain claims with specific meanings, known as <em>registered</em> <em>claims</em>, along with some personal user data, although no claim is mandatory.</p><p>The JWT specification reserves several claims, none of them mandatory, but seven are recommended for better interoperability. The seven claims are as follows,</p><ul><li><strong>iss</strong> (issuer): party that issued the JWT</li><li><strong>sub</strong> (subject): identifies the user</li><li><strong>aud</strong> (audience): recipients of the JWT, i.e. the application reading the data from it</li><li><strong>exp</strong> (expiration time): time (seconds since epoch) after which the JWT expires</li><li><strong>nbf</strong> (not before time): time (seconds since epoch) from which the JWT is considered valid</li><li><strong>iat</strong> (issued at time): time at which the JWT was issued</li><li><strong>jti</strong> (JWT ID): unique identifier; can be used to differentiate JWTs with similar content</li></ul><p>You can see the complete list of reserved claims <a href="https://www.iana.org/assignments/jwt/jwt.xhtml?ref=whackd.in#claims">here</a>.</p><h2 id="unencrypted-jwts">Unencrypted JWTs</h2><p>So far we have learned about the header and the payload, which is enough to construct an unencrypted JWT.</p><p>An unencrypted JWT is formed with a simple header</p><pre><code>{
  &quot;alg&quot;: &quot;none&quot;,
  &quot;typ&quot;: &quot;JWT&quot;
}</code></pre><p>and with payload</p><pre><code>{
  &quot;sub&quot;: &quot;user108&quot;,
  &quot;name&quot;: &quot;Shakal&quot;,
  &quot;iat&quot;: 1629690269
}</code></pre><p>Now let&apos;s create an unencrypted JWT using these two parts. The pseudo-code for this is as follows,</p><pre><code>token = base64urlEncode(header) + &quot;.&quot; + base64urlEncode(payload) + &quot;.&quot;</code></pre><p>You can write the encode and decode functions in any language. Here I am using the Java 8 <code>Base64</code> API that is part of the JDK</p><pre><code class="language-Java">import java.nio.charset.StandardCharsets;
import java.util.Base64;
public String base64urlEncode(String raw) {
    return Base64.getUrlEncoder().withoutPadding().encodeToString(raw.getBytes(StandardCharsets.UTF_8));
}</code></pre><p>We also need to remove the trailing padding characters (=); for this we use the <code>withoutPadding()</code> method. The resulting string will look like</p><blockquote>ewogICJhbGciOiAibm9uZSIsCiAgInR5cCI6ICJKV1QiCn0<strong>.</strong>ewogICJzdWIiOiAidXNlcjEwOCIsCiAgIm5hbWUiOiAiU2hha2FsIiwKICAiaWF0IjogMTYyOTY5MDI2OQp9<strong>.</strong></blockquote><h2 id="conclusion">Conclusion</h2><p>We now have a basic understanding of the structure of JWTs and the parts used to create them. We discussed the first two parts of a JWT, the <em>header</em> and the <em>payload</em>, and created an unsecured JWT. <br>In real-world scenarios, unsecured JWTs are rarely, if ever, used.</p><p>In the next part, we will discuss encrypted and signed JWTs, with more detail on the signature part.</p>]]></content:encoded></item><item><title><![CDATA[Internet of Value (IoV) and NFTs]]></title><description><![CDATA[Our ever-changing desires and internet usage are creating new dimensions of money-generation trends]]></description><link>https://whackd.in/internet-of-value-iov-and-nfts/</link><guid isPermaLink="false">624866e20c979a0e7a1019cb</guid><category><![CDATA[bitcoin]]></category><category><![CDATA[blockchain]]></category><category><![CDATA[nft]]></category><category><![CDATA[programming]]></category><category><![CDATA[wazirx]]></category><category><![CDATA[crypto]]></category><category><![CDATA[ethereum]]></category><category><![CDATA[welcome]]></category><dc:creator><![CDATA[Rohit Pal]]></dc:creator><pubDate>Sun, 22 Aug 2021 10:27:43 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1502920514313-52581002a659?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDF8fGFydCUyMG1vbmV5fGVufDB8fHx8MTYyOTYyNzQwNA&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img 
src="https://images.unsplash.com/photo-1502920514313-52581002a659?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDF8fGFydCUyMG1vbmV5fGVufDB8fHx8MTYyOTYyNzQwNA&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Internet of Value (IoV) and NFTs"><p></p><p><strong>Change</strong> on the Internet has been progressively rapid with growing usage. Our habits, behaviour, work culture, entertainment and interactions have been greatly impacted as a result. With more and more advanced technology coming up, there is a growing need for our desires, likes and dislikes to be quantified and carry value on the Internet.</p><p>But digitalized data has always been easy to replicate and duplicate, which causes many things to lose value as soon as they are copied to the Internet. Any document or piece of media, once digital, can be copied and shared with a single copy-paste operation, letting the work lose its value.</p><p>Creators around the world producing new work of value have faced this problem for a while. Music artists and cinema worry about their work being copied and shared over the Internet via private networks, losing the value and money of their work. A research paper built on years of work, once leaked or shared across the Internet, loses its value in a few seconds.</p><blockquote>Money is not necessarily just a currency but anything of value.</blockquote><p><strong>Blockchain</strong> offers an option to resolve this problem. With blockchain, a thing can be wrapped in a transaction as a hash and recorded on a blockchain network, where its hash is unique in the chain. 
This means nothing like it exists on the chain; it cannot be modified (or copied) and has been agreed upon by members of the blockchain as a valid transaction.</p><blockquote>Nothing like me exists on the network.</blockquote><p><u>This has paved the way for something to exist on the Internet as a value (IoV), which can trigger people&apos;s desires and be quantified by a certain value based on its demand</u>.</p><p></p><p>This is the concept of the <strong>Internet of Value</strong>, realized in the form of <strong>Non-fungible Tokens (NFTs)</strong>, which, once released by their owner content creators as items of value, can be distributed over the Internet or via a marketplace.</p><p>In fact, this has created new jobs in the market, and people are specifically calling themselves <strong>NFT Artists</strong>, selling their work on the blockchain as NFTs, which can guarantee them money or a crypto coin as value in return. This allows them to stay kings of their work as they create value, and also to become investors in blockchain. 
Art is being shared as videos, GIFs or JPEGs.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://whackd.in/content/images/2021/08/image.webp" class="kg-image" alt="Internet of Value (IoV) and NFTs" loading="lazy" width="2000" height="1001" srcset="https://whackd.in/content/images/size/w600/2021/08/image.webp 600w, https://whackd.in/content/images/size/w1000/2021/08/image.webp 1000w, https://whackd.in/content/images/size/w1600/2021/08/image.webp 1600w, https://whackd.in/content/images/2021/08/image.webp 2078w" sizes="(min-width: 720px) 720px"><figcaption>An NFT art on WazirX</figcaption></figure><p>This is what is being referred to as the Creator Economy: individuals with more creative power are given more value on the Internet.</p><blockquote>Creators are &#x1F451;. Builders are &#x1F451;. Developers are &#x1F451;</blockquote><p>Let&apos;s now try to understand the current state of the Internet. Most content creation on the Internet is built on centralized platforms owned by large corporations. Anything of value is taken in and accepted by an intermediary, then transferred to its target, with a certain transaction fee taken along the way.</p><p>Centralized corporations can promote, suppress, or allow/disallow your work on the Internet. Things that matter to them get shared, while anything else that could be of value never becomes a thing on its own. Peer-to-peer networks resolve this by connecting the parties directly.</p><p>This involvement of a third party when transferring something of value holds for social media content, money, and even data on chat applications like WhatsApp. 
</p><blockquote>Data handed to a mediator cannot be trusted, and users unknowingly become part of the mediator&apos;s data farming.</blockquote><p>Technically, however, this still needs some improvements: a common blockchain protocol for <strong>seamless</strong>, <strong>interoperable</strong> and <strong>reliable transactions</strong> across different types of currencies, blockchains and networks.</p><p>As of now, developers across the world are focused on solving these problems while also trying to come up with new products. Some interesting frameworks and new protocols are surely upcoming in the near future, which could change the current state of the Internet forever.</p><p>Till then keep living the curious life &#x1F44D;&#x1F3FD;.</p><p></p>]]></content:encoded></item><item><title><![CDATA[Decentralized Applications would change Web Apps]]></title><description><![CDATA[Recent progress in Decentralized Apps could lead to big changes in Web Apps and Social Media Platforms]]></description><link>https://whackd.in/recent-trends-in-decentralized-applications/</link><guid isPermaLink="false">624866e20c979a0e7a1019ca</guid><category><![CDATA[blockchain]]></category><category><![CDATA[nft]]></category><category><![CDATA[bitcoin]]></category><category><![CDATA[wazirx]]></category><category><![CDATA[doge]]></category><category><![CDATA[twitter]]></category><category><![CDATA[programming]]></category><category><![CDATA[crypto]]></category><dc:creator><![CDATA[Rohit Pal]]></dc:creator><pubDate>Wed, 18 Aug 2021 05:11:05 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1547280746-0e984cc4ca31?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDMwfHxuZXR3b3JrfGVufDB8fHx8MTYyOTI2MzU3Nw&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img 
src="https://images.unsplash.com/photo-1547280746-0e984cc4ca31?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDMwfHxuZXR3b3JrfGVufDB8fHx8MTYyOTI2MzU3Nw&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Decentralized Applications would change Web Apps"><p></p><p>The tech world is abuzz promoting cryptocurrencies such as <strong>Bitcoin</strong>, <strong>Ether</strong>, <strong>Cardano</strong>, <strong>Polkadot</strong> and <strong>Dogecoin</strong>, with crypto-currency exchanges almost overloaded at the moment.</p><p>India is not far behind in this race of coin exchanges. Home-grown exchanges like <a href="https://wazirx.com/?ref=whackd.in"><strong>WazirX</strong></a>, <a href="https://coindcx.com/?ref=whackd.in"><strong>CoinDCX</strong></a>, <a href="https://zebpay.com/?ref=whackd.in"><strong>Zebpay</strong></a>, <a href="https://coinswitch.co/in?ref=whackd.in"><strong>CoinSwitch Kuber</strong></a> and <a href="https://www.unocoin.com/?ref=whackd.in"><strong>UnoCoin</strong></a><strong> </strong>have been around for a while now, consistently improving their investment platforms for the local market and allowing customers to join the network with as little as 10 rupees of base investment.</p><p>Celebrity tech innovators like Elon Musk have, with single tweets, made the market inviting enough for new and old investors to join in on Dogecoin. 
Meme marketing, as usual, is a trend that has been used quite a lot recently.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://whackd.in/content/images/2021/08/image-1.webp" class="kg-image" alt="Decentralized Applications would change Web Apps" loading="lazy" width="1600" height="1304" srcset="https://whackd.in/content/images/size/w600/2021/08/image-1.webp 600w, https://whackd.in/content/images/size/w1000/2021/08/image-1.webp 1000w, https://whackd.in/content/images/2021/08/image-1.webp 1600w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Elon presenting Doge</span></figcaption></figure><p>While technical progress in the direction of crypto has been good, new investors have mixed impressions of the exchange platforms because of sudden spikes and dips in the exchange rate.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.deseret.com/2021/5/19/22443663/dogecoin-drop-china-ban?ref=whackd.in"><div class="kg-bookmark-content"><div class="kg-bookmark-title">The most likely reason Dogecoin dropped 40% in 24 hours</div><div class="kg-bookmark-description">Dogecoin had a massive drop as other cryptocurrencies fell in the last 24 hours. Why did it drop 40%? 
Why did Doge face a massive dip?</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://cdn.vox-cdn.com/uploads/chorus_asset/file/21958854/deseret-192x192.0.png" alt="Decentralized Applications would change Web Apps"><span class="kg-bookmark-author">Deseret News</span><span class="kg-bookmark-publisher">Herb Scribner</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://cdn.vox-cdn.com/thumbor/b9UMF9qDtswNKMRPQbo0mHWSmVQ=/0x215:3000x1786/fit-in/1200x630/cdn.vox-cdn.com/uploads/chorus_asset/file/22490992/Dogecoin_1_Red_Website_Illustration.jpg" alt="Decentralized Applications would change Web Apps"></div></a></figure><p>These sudden drops in recent times have had multiple reasons, the last major one being China&apos;s crackdown on cryptocurrency, banning miners.</p><p>Cryptocurrency as a technology is absolutely fascinating; however, solutions for decentralized applications are still limited. One question that has stayed in place for long is energy consumption and its impact on the environment and climate change. Many new answers to energy consumption have come as consensus solutions like Proof-of-Authority, but these have been argued to form a centralized network consisting of authorized users.</p><p>Another aspect of cryptocurrency is the base of this technology, which is <strong>Blockchain. </strong></p><p><strong>Blockchain is the source of truth as a chain which cannot be modified and reinforces facts with consensus over the Blockchain P2P network.</strong></p><p>Being able to store or reinforce this truth via consensus allows it to support processes that are decentralized over the network. But this idea of being online as a node, as in Bitcoin, is still not accepted by a lot of people as of now. 
Certainly, there will be hardware coming to the market as the network and market grow.</p><p><em><u>Decentralized Applications</u></em> (DApps), also labelled Web3, are also picking up as a standalone technology which is not a cryptocurrency-based system but shares the base technology of blockchain.</p><p>The presence of many social media platforms, be it Twitter, Instagram, Facebook, Snapchat, WhatsApp, or short-video content applications like Tiktok, has given creators a platform to create their art and content and share thoughts over social media easily. Even Youtube has been experimenting with Youtube Shorts as short videos. So much so that, mixed with targeted marketing, creators can now get direct monetary benefit and even pursue this as a viable career option. But the factors defining the value of content are still vague, which keeps usage of these apps limited for a lot of users.</p><p>Blockchain in the form of NFTs (Non-Fungible Tokens) allows any content on the Internet to be monetized in itself and have a value of its own. <u>Changing habits and a younger generation being on the internet make it possible for something to exist as a value on the internet, an impact accelerated by Covid-19 driving more activity online</u>.</p><p>These new technologies are upcoming and are quickly being experimented with now.</p><p><strong>Facebook&apos;s Diem</strong>, a blockchain-based payment system, has already been labelled a digital currency, with the Pontem network now in place to carry out transactions. 
So any content shared with people could be monetized, and direct digital purchases can happen in the future.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://pontem.network/?ref=whackd.in"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Pontem is an Experimental Network for Diem</div><div class="kg-bookmark-description">Pontem is an Experimental Network for the Diem Blockchain</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://assets-global.website-files.com/60536b901b879c2f395d75d0/60536b911b879c25ed5d75f9_Frame%2057.png" alt="Decentralized Applications would change Web Apps"></div></div><div class="kg-bookmark-thumbnail"><img src="https://i.imgur.com/nVshRMt.png" alt="Decentralized Applications would change Web Apps"></div></a></figure><p>Recently, <strong>Twitter</strong> announced its decentralized social network project called &apos;<strong>BlueSky</strong>&apos;. Twitter has its own share of controversies over blocking or amplifying thoughts or opinions based on its own policies. People with followers also get paid to make tweets.</p><p><u>Being decentralized would allow tweets that are part of a blockchain network to remain available in restricted areas of the world, even if the network is restricted</u>.</p><p>Also, any content could now have a value; only those who want to own or view it pay, and get that content&apos;s data as value to them.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.theverge.com/2021/8/16/22627435/twitter-bluesky-lead-jay-graber-decentralized-social-web?ref=whackd.in"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Twitter&#x2019;s decentralized social network project finally has a leader</div><div class="kg-bookmark-description">Bluesky, the decentralized social network project funded by Twitter, will be led by Jay Graber, the creator of Happening. 
Graber announced the new position alongside the news that Bluesky is hiring its first developers.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://cdn.vox-cdn.com/uploads/chorus_asset/file/7395351/android-chrome-192x192.0.png" alt="Decentralized Applications would change Web Apps"><span class="kg-bookmark-author">The Verge</span><span class="kg-bookmark-publisher">Ian Carlos Campbell</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://cdn.vox-cdn.com/thumbor/Sm2RiKrqboGlQo8W-lvgTBwLaCw=/186x20:1610x766/fit-in/1200x630/cdn.vox-cdn.com/uploads/chorus_asset/file/22786033/Screen_Shot_2021_08_16_at_12.06.33_PM.png" alt="Decentralized Applications would change Web Apps"></div></a></figure><p><strong>Tiktok</strong> recently integrated <strong>Audius</strong> as a music-streaming platform. So, in turn, short videos or funny memes created there could carry a monetary value and possibly be purchased or re-sold in the marketplace as digital content.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://cryptobriefing.com/music-streaming-platform-audius-surges-143-tiktok-integration/?ref=whackd.in"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Music Streaming Platform Audius Surges 143% on TikTok Integration | Crypto Briefing</div><div class="kg-bookmark-description">Crypto-based music platform Audius has partnered with TikTok, catapulting the AUDIO token up 143% in less than 24 hours.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://static.cryptobriefing.com/wp-content/uploads/2020/02/02093517/ios-144.png" alt="Decentralized Applications would change Web Apps"><span class="kg-bookmark-author">Crypto Briefing</span><span class="kg-bookmark-publisher">Timothy Craig</span></div></div><div class="kg-bookmark-thumbnail"><img 
src="https://static.cryptobriefing.com/wp-content/uploads/2021/08/17042647/tiktok-crypto-streaming-platform-cover-768x403.png" alt="Decentralized Applications would change Web Apps"></div></a></figure><p>The upcoming changes are visible and will hit the internet soon. They will change web apps, and the developer market too in terms of skills. Marketing of products could also see this change, as digital marketers could sell or re-sell their ads via such platforms.</p><p>Most importantly, creators who lose their value on one platform would still be able to hold their work and share it on a different platform. This should be the case; however, big tech giants might not allow this.</p><p>So, if you are a content creator for <strong>Tiktok</strong>, and it gets banned in your country, you might not have to cry about it &#x1F600;<strong>. There is a possibility of an open-sourced network allowing platform transitions like this (looking at you, Indian startups).</strong></p>]]></content:encoded></item><item><title><![CDATA[Generating Github Like Identicons]]></title><description><![CDATA[<p>Github introduced user display images as <a href="https://github.blog/2013-08-14-identicons/?ref=whackd.in"><strong><em>Identicons</em></strong></a> way back in 2013. 
Github&apos;s Identicons are simple <code>5x5</code> pixel colored images encoding a user&apos;s identity into their display picture.</p><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2021/02/image-4.png" class="kg-image" alt loading="lazy" width="2000" height="658" srcset="https://whackd.in/content/images/size/w600/2021/02/image-4.png 600w, https://whackd.in/content/images/size/w1000/2021/02/image-4.png 1000w, https://whackd.in/content/images/size/w1600/2021/02/image-4.png 1600w, https://whackd.in/content/images/2021/02/image-4.png 2384w" sizes="(min-width: 720px) 720px"></figure><p>Initially <strong><em>Identicons</em></strong> were mostly recognized online as <strong><em>Gravatar</em></strong>, short for Globally recognized avatar, sometimes also regarded as digital-fingerprint, is an</p>]]></description><link>https://whackd.in/generating-github-like-identicons/</link><guid isPermaLink="false">624866e20c979a0e7a1019c8</guid><category><![CDATA[programming]]></category><dc:creator><![CDATA[Rohit Pal]]></dc:creator><pubDate>Sun, 21 Feb 2021 19:48:19 GMT</pubDate><media:content url="https://whackd.in/content/images/2021/02/identicon-banner-4.png" medium="image"/><content:encoded><![CDATA[<img src="https://whackd.in/content/images/2021/02/identicon-banner-4.png" alt="Generating Github Like Identicons"><p>Github introduced user display images as <a href="https://github.blog/2013-08-14-identicons/?ref=whackd.in"><strong><em>Identicons</em></strong></a> way back in 2013. 
Github&apos;s Identicons are simple <code>5x5</code> pixel colored images encoding a user&apos;s identity into their display picture.</p><figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2021/02/image-4.png" class="kg-image" alt="Generating Github Like Identicons" loading="lazy" width="2000" height="658" srcset="https://whackd.in/content/images/size/w600/2021/02/image-4.png 600w, https://whackd.in/content/images/size/w1000/2021/02/image-4.png 1000w, https://whackd.in/content/images/size/w1600/2021/02/image-4.png 1600w, https://whackd.in/content/images/2021/02/image-4.png 2384w" sizes="(min-width: 720px) 720px"></figure><p>Initially, identicons were mostly recognized online as <strong><em>Gravatars</em></strong>. A Gravatar, short for Globally recognized avatar and sometimes regarded as a digital fingerprint, is an image you could generate that would identify you to any website, as an introduction with some basic details.</p><p><strong>Gravatar </strong>was a service that would let you create a profile with your <em>email</em>, an <em>image</em> and a few basic details, and would give you a URL letting you display or embed your profile details in any website. 
</p><p>The <strong>algorithm</strong> takes an email (or IP address) as the user identity, creates an <code>MD5</code> hash and then hex-encodes it to generate a string which, as part of a URL, identifies the user.</p><p>The most basic image request URL looks like this: <code>https://www.gravatar.com/avatar/{HASH}</code></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://whackd.in/content/images/2021/02/image-5.png" class="kg-image" alt="Generating Github Like Identicons" loading="lazy" width="564" height="85"><figcaption><span style="white-space: pre-wrap;">Gravatars on Stackoverflow</span></figcaption></figure><p>where HASH is replaced with the <a href="https://en.gravatar.com/site/implement/hash/?ref=whackd.in"><em>calculated hash</em></a> for the specific email address you are requesting. The generated URL can be used in an <code>IMG</code> tag to display the user image:</p><pre><code>https://www.gravatar.com/avatar/205e460b479e2e5b48aec07710c08d50
</code></pre>
<p>In this post, we will generate user <strong><em>identicons </em></strong>in <strong><em>Github</em></strong>&apos;s style, which you could use further to build an identicon-generation service for user images in your projects.</p><h3 id="github-identicon-algorithm">Github Identicon Algorithm</h3><p>Take a <em>user identifier</em> (say an email), <u>hash</u> it (<code>MD5</code> here) and then <u>convert the hex digest to binary</u>. Then, to create a symmetric image, mirror the matrix.</p><p>Input =&gt; <em><a href="mailto:someone@email.com">someone@email.com</a></em> (User Identity)</p>
<pre><code>fn MD5Hex(_) =&gt; 23adaae0eafc12761c29d920c5da1aa8
// then
fn toInt16(_) =&gt; 47424713038463833496632231788185000616
// then
fn toBinary15Limit(_) =&gt; 001101010101000
</code></pre>
<figure class="kg-card kg-image-card"><img src="https://whackd.in/content/images/2021/02/image-8.png" class="kg-image" alt="Generating Github Like Identicons" loading="lazy" width="803" height="401" srcset="https://whackd.in/content/images/size/w600/2021/02/image-8.png 600w, https://whackd.in/content/images/2021/02/image-8.png 803w" sizes="(min-width: 720px) 720px"></figure><p>Resultant matrix can be used to draw an image with color pixel on position where pixel value is <code>1</code>. If you have noticed, we have just expressed the user identifier in a <code>2D domain</code>.</p><h3 id="generating-identicon-matrix">Generating Identicon Matrix</h3><p>Now we can use any drawing library like <code>Canvas</code> in <strong><em>NodeJS</em></strong> or <code>PIL</code>(Pillow) in <strong><em>Python</em></strong> to draw the <strong><em>identicon</em></strong>. Let&apos;s do this via NodeJS Canvas API. Here&apos;s the snippet to generate image from matrix.</p><pre><code>const { createCanvas } = require(&apos;canvas&apos;);
const fs = require(&apos;fs&apos;);

const SIZE = 350; // 5 cells of 50px each, plus a 50px margin on every side
const matrix = [
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [1, 0, 0, 0, 1],
    [0, 1, 0, 1, 0]];
const canvas = createCanvas(SIZE, SIZE);

const ctx = canvas.getContext(&apos;2d&apos;);
ctx.fillStyle = &apos;#f2f2f2&apos;;
ctx.fillRect(0, 0, canvas.width, canvas.height);

ctx.fillStyle = &apos;#03a4f4&apos;;
// Paint a 50x50 square for every matrix cell set to 1; the (+1)
// offsets leave a 50px margin around the 5x5 grid.
for (let i = 0; i &lt; matrix.length; i++) {
    for (let j = 0; j &lt; matrix[i].length; j++) {
        if (matrix[i][j] === 1) {
            ctx.fillRect((j + 1) * 50, (i + 1) * 50, 50, 50);
        }
    }
}

fs.writeFileSync(&apos;out.png&apos;, canvas.toBuffer());
</code></pre>
<figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://whackd.in/content/images/2021/02/image-14.png" class="kg-image" alt="Generating Github Like Identicons" loading="lazy" width="350" height="350"><figcaption><span style="white-space: pre-wrap;">Generated output via Canvas API</span></figcaption></figure><p><br>Hope you liked the post. Cheers!</p>]]></content:encoded></item><item><title><![CDATA[Worker Threads in NodeJS]]></title><description><![CDATA[For its design, NodeJS was preferred as an I/O-performant backend rather than a CPU-performant one. ]]></description><link>https://whackd.in/worker-threads-in-nodejs-part-1/</link><guid isPermaLink="false">624866e20c979a0e7a1019c5</guid><category><![CDATA[programming]]></category><category><![CDATA[nodejs]]></category><category><![CDATA[javascript]]></category><dc:creator><![CDATA[Rohit Pal]]></dc:creator><pubDate>Fri, 19 Feb 2021 12:27:15 GMT</pubDate><media:content url="https://whackd.in/content/images/2021/02/Blog-Header-1200x600-px--1-.png" medium="image"/><content:encoded><![CDATA[<img src="https://whackd.in/content/images/2021/02/Blog-Header-1200x600-px--1-.png" alt="Worker Threads in NodeJS"><p>Created by <a href="https://en.wikipedia.org/wiki/Ryan_Dahl?ref=whackd.in">Ryan Dahl</a> and first released on May 27, 2009, NodeJS is traditionally seen as a single-threaded asynchronous engine: a backend runtime built on Chrome&apos;s V8 engine.</p><p>By design, it was preferred for I/O-bound workloads rather than CPU-bound ones. However, the NodeJS team has been working towards making NodeJS work with threads too. 
<br><br>Even in web browsers, the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers?ref=whackd.in">Web Worker API</a> provides a way to create background tasks that run alongside the browser&apos;s <code>window</code> scope, so that CPU-bound work does not block the single-threaded engine. Web Workers are a topic for future posts; let&apos;s get back to <strong><em>Worker Threads</em></strong>.</p><p><strong><em>Worker Threads</em></strong> were first <a href="https://nodejs.org/en/blog/release/v10.5.0/?ref=whackd.in">introduced to NodeJS</a> as an experimental feature in version <code>v10.5.0</code>; with <code>v12 LTS</code> the API became stable and ready to use in production.</p><h2 id="basic-introduction-to-api">Basic Introduction to API</h2><p>Worker Threads can be imported in NodeJS as simply as</p><pre><code>const worker = require(&apos;worker_threads&apos;);
</code></pre>
<p>And then, create a <strong><em>Worker Thread Object</em></strong> as</p><pre><code>const worker = new Worker(__filename)
</code></pre>
<p>The <code>Worker</code> class represents an independent JavaScript execution thread. <code>__filename</code> can be the same JavaScript file or a target JavaScript file to load in the newly created <code>thread</code>.</p><p>The <code>Worker</code> object also accepts other values in its constructor, one of which is <code>workerData</code>. Let&apos;s see an example of a script that creates a new thread from the main thread and processes the initial data passed via <code>workerData</code> in the constructor.</p><pre><code>const {
  Worker, isMainThread, parentPort, workerData
} = require(&apos;worker_threads&apos;);

if (isMainThread) {
  const content = &apos;data to process&apos;; // any structured-cloneable value
  const worker = new Worker(__filename, {
    workerData: content
  });
} else {
  // Running inside the worker thread:
  // workerData holds the value passed by the main thread.
  console.log(workerData);
}
</code></pre>
<p><code>isMainThread</code> tells us whether the code is running in the main thread or not. Here, we created the worker instance and passed some content to its constructor via <code>workerData</code>. Since the worker is created with the same file location, the same file is loaded again inside the new thread, and the passed content is available there through the imported <code>workerData</code> value.</p><h2 id="inter-thread-communication">Inter-Thread Communication</h2><p><strong><em>Two-way communication</em></strong> can be achieved through inter-thread message passing. Worker Threads support <code>MessageChannel</code>, which exposes two <code>MessagePort</code> objects, <code>port1</code> and <code>port2</code>, for sending and receiving messages. Here is a code snippet.</p><pre><code>const assert = require(&apos;assert&apos;);
const {
  Worker, MessageChannel, MessagePort, isMainThread, parentPort
} = require(&apos;worker_threads&apos;);

if (isMainThread) {
  const worker = new Worker(__filename);
  const channel = new MessageChannel();
  
  worker.postMessage({ hereIsYourPort: channel.port1 }, [channel.port1]);
  channel.port2.on(&apos;message&apos;, (value) =&gt; {
    console.log(&apos;received:&apos;, value);
  });
} else {
  parentPort.once(&apos;message&apos;, (value) =&gt; {
    assert(value.hereIsYourPort instanceof MessagePort);
    value.hereIsYourPort.postMessage(&apos;the worker is sending this&apos;);
    value.hereIsYourPort.close();
  });
}
</code></pre>
<p>The API is simple and easy to understand on its own. Since the concept of threads comes with a lot of caveats, we will discuss those in the following blog posts.</p><p>Next, we will analyse a use case of breaking a task down into threads and see how we can improve the performance of certain CPU-intensive tasks via <strong><em>Worker Threads</em></strong> in NodeJS.</p><p>Stay Tuned...</p>]]></content:encoded></item></channel></rss>