<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Aditya Sharma]]></title><description><![CDATA[Building crazy open-source collaborative projects. Checkout my best projects here: https://github.com/adityasharma-tech]]></description><link>https://blogs.adityasharma.tech</link><generator>RSS for Node</generator><lastBuildDate>Fri, 17 Apr 2026 12:10:15 GMT</lastBuildDate><atom:link href="https://blogs.adityasharma.tech/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[The Missing Guide: Android Screen Audio Streaming with WebRTC (React-Native, JNI, C++ ADM, WebRTC Build)]]></title><description><![CDATA[Status: Work in ProgressLast Updated: 26-11-2025

There is no docs or API available for webrtc_android source code, that’s why it took me months to research. Still searching across the internet to send custom AudioRecord data samples to the native we...]]></description><link>https://blogs.adityasharma.tech/the-missing-guide-android-screen-audio-streaming-with-webrtc-react-native-jni-c-adm-webrtc-build</link><guid isPermaLink="true">https://blogs.adityasharma.tech/the-missing-guide-android-screen-audio-streaming-with-webrtc-react-native-jni-c-adm-webrtc-build</guid><category><![CDATA[react-native-webrtc]]></category><category><![CDATA[WebRTC]]></category><category><![CDATA[React Native]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[Android]]></category><category><![CDATA[JNI Integration]]></category><dc:creator><![CDATA[Aditya Sharma]]></dc:creator><pubDate>Tue, 25 Nov 2025 19:30:35 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1764099018408/e749ec2e-1424-4b2d-a53d-98ca483677b6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Status:</strong> Work in Progress<br /><strong>Last Updated:</strong> 26-11-2025</p>
<blockquote>
<p>There are no docs or public API for the webrtc_android source code, which is why this took me months of research. I’m still searching for a way to feed custom AudioRecord data samples into the native WebRTC pipeline.</p>
</blockquote>
<p>Around a month ago, I had to build a simple React Native application that transmits the audio captured while recording the screen over WebRTC, on a LAN only. It should have been fairly simple: my goal was to share the live screen-audio capture so it plays on all other connected devices in realtime over the LAN, using WebRTC.</p>
<p>But it was not going to be that easy. In <code>react-native-webrtc</code>, the <code>navigator.mediaDevices.getDisplayMedia()</code> API only gives you a VideoTrack for the screen video capture, not an AudioTrack for the screen audio. So I tried different libraries, but it is not that simple: even when a library gives you the audio, it does not give you an AudioTrack, which is what WebRTC wants. So I opened the source code of react-native-webrtc; there is no screen-audio capture service in it, meaning I can’t send screen audio through react-native-webrtc as it stands.</p>
<p>I then looked into how other companies like Discord, <a target="_blank" href="https://audiorelay.net">AudioRelay</a>, &amp; most importantly <a target="_blank" href="https://github.com/ant-media">AntMedia</a> are able to share audio over WebRTC on Android. AntMedia ships a <a target="_blank" href="https://github.com/ant-media/WebRTC-Android-SDK/tree/master/webrtc-android-framework/">WebRTC Android SDK</a> where I found something relevant: they make a custom WebRTC build and copy org/webrtc into their application, along with libjingle_peerconnection_so.so. But I still found nothing there that worked for me, and I am fairly sure they are all building a custom C++ implementation around WebRTC and AudioPlaybackCaptureConfig.</p>
<p>Then I decided to add my own track to react-native-webrtc by capturing audio from the screen myself and exposing a Java/TypeScript API to React Native. First we have to initiate the screen recording and ask for permission using the <a target="_blank" href="https://developer.android.com/reference/android/media/projection/MediaProjection">MediaProjection</a> Android API; then, using the <a target="_blank" href="https://developer.android.com/guide/topics/media/playback-capture">AudioPlaybackCapture</a> API, we can capture the screen audio. That API depends on the MediaProjection token when building its AudioPlaybackCaptureConfiguration. And yes, I am able to capture the audio and receive it through an <code>AudioRecord</code> instance, from which I get the raw PCM samples to do whatever I want with.</p>
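<p>For context, requesting that MediaProjection token looks roughly like this (a minimal sketch inside an Activity; <code>REQUEST_MEDIA_PROJECTION</code> is just an arbitrary request code I’ve named for illustration):</p>
<pre><code class="lang-java">// Ask the user for screen-capture permission; the resulting token is what
// AudioPlaybackCaptureConfiguration needs.
MediaProjectionManager mpm =
    (MediaProjectionManager) getSystemService(Context.MEDIA_PROJECTION_SERVICE);
startActivityForResult(mpm.createScreenCaptureIntent(), REQUEST_MEDIA_PROJECTION);

// Then, in onActivityResult(requestCode, resultCode, data):
// mediaProjection = mpm.getMediaProjection(resultCode, data);
</code></pre>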
<p>Next I created a method, <code>createScreenAudioTrack</code>, inside the react-native-webrtc library to create the track. And yes, this works: we get a new <code>AudioTrack</code>-type track alongside the <code>VideoTrack</code> in the React Native code. But we can’t directly pass in the audio data received via the <code>AudioPlaybackCaptureConfiguration</code> (that configuration is also the reason we need the MediaProjection intent in the first place). So I have the incoming audio and I have the track, but feeding data into the WebRTC track is the difficult part: the only way is to create a new <a target="_blank" href="https://getstream.github.io/webrtc-android/stream-webrtc-android/org.webrtc.audio/-audio-device-module/index.html">AudioDeviceModule</a> specifically for the <strong>screen-capture audio</strong>. That, however, collides with the default <strong>microphone</strong> module WebRTC uses, which would remove the microphone audio: we would get nothing on the microphone track. But first, let’s see whether I can even create an <a target="_blank" href="https://getstream.github.io/webrtc-android/stream-webrtc-android/org.webrtc.audio/-audio-device-module/index.html"><em>AudioDeviceModule</em></a> at all.</p>
<pre><code class="lang-java">// Requires a valid MediaProjection token obtained from the user
AudioPlaybackCaptureConfiguration config = new AudioPlaybackCaptureConfiguration.Builder(mediaProjection)
    .addMatchingUsage(AudioAttributes.USAGE_MEDIA)
    .build();
AudioFormat format = new AudioFormat.Builder()
    .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
    .setSampleRate(48000)
    .setChannelMask(AudioFormat.CHANNEL_IN_MONO)
    .build();
int bufferSize = AudioRecord.getMinBufferSize(48000,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord ar = new AudioRecord.Builder()
    .setAudioFormat(format)
    .setBufferSizeInBytes(bufferSize)
    .setAudioPlaybackCaptureConfig(config)
    .build();

ar.startRecording();
byte[] buf = new byte[bufferSize];
int read = ar.read(buf, 0, buf.length); // raw 16-bit PCM samples
</code></pre>
<p>It turns out I can’t create an <a target="_blank" href="https://getstream.github.io/webrtc-android/stream-webrtc-android/org.webrtc.audio/-audio-device-module/index.html">AudioDeviceModule</a> in Java alone. It has to be created in native C++, using the <a target="_blank" href="https://developer.android.com/ndk/guides/jni-tips">JNI bridge</a> to talk between Java and native C++. So I created a simple C++ module in the Android project and used JNI to call some C++ methods and get data back. Now it’s time to create a custom ADM (AudioDeviceModule) in C++ for WebRTC. For that, I have to compile the whole Chromium WebRTC source for Android and then link the libraries and include directories in the Android CMake file. First, I have to install Chromium’s <a target="_blank" href="https://www.chromium.org/developers/how-tos/install-depot-tools/">depot_tools</a> to get tools like <code>gclient</code>, <code>fetch</code> and <code>git-cl</code> for fetching the Chromium WebRTC repositories.</p>
<pre><code class="lang-cpp">extern "C" JNIEXPORT void JNICALL
Java_com_example_ScreenAudio_nativeFeedFrame(JNIEnv* env, jobject thiz, jbyteArray data) {
  jsize len = env-&gt;GetArrayLength(data);
  jbyte* samples = env-&gt;GetByteArrayElements(data, NULL);
  // TODO: hand the `len` bytes of PCM in `samples` to the custom ADM here
  env-&gt;ReleaseByteArrayElements(data, samples, JNI_ABORT);  // read-only, no copy-back needed
}
</code></pre>
<p>I followed this documentation to compile the WebRTC source for webrtc_android on a linux_x86 host, which took almost a whole day to clone, sync &amp; compile:</p>
<p><a target="_blank" href="https://webrtc.github.io/webrtc-org/native-code/android/">https://webrtc.github.io/webrtc-org/native-code/android</a></p>
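<p>For the record, the build boils down to roughly these commands (assuming depot_tools is already on your PATH; the exact <code>gn</code> args were the part I had to figure out myself):</p>
<pre><code class="lang-bash">fetch --nohooks webrtc_android
gclient sync
cd src
gn gen out/Release --args='target_os="android" target_cpu="arm64" is_debug=false'
autoninja -C out/Release
</code></pre>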
<p>A few arguments needed when building for Android are missing from that documentation, which I only understood later on. The WebRTC Android build setup is ready now; I have to copy the .so files into the libs folder and the WebRTC build headers into cpp/webrtc/include, then link the directories and files in the CMake configuration.</p>
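<p>My CMake wiring looks roughly like this (a sketch with assumed paths and target names; <code>screen-audio</code> stands in for the JNI library built from the C++ sources):</p>
<pre><code class="lang-cmake"># Prebuilt WebRTC: headers copied under cpp/webrtc/include, .so per ABI under libs/
add_library(webrtc-lib SHARED IMPORTED)
set_target_properties(webrtc-lib PROPERTIES IMPORTED_LOCATION
    ${CMAKE_SOURCE_DIR}/../libs/${ANDROID_ABI}/libjingle_peerconnection_so.so)

target_include_directories(screen-audio PRIVATE ${CMAKE_SOURCE_DIR}/webrtc/include)
target_link_libraries(screen-audio webrtc-lib log)
</code></pre>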
<blockquote>
<p>For anyone interested, I keep a public repo with my WebRTC C++/JNI experiments here: <a target="_blank" href="https://github.com/adityasharma-tech/webrtc-cpp-learning">https://github.com/adityasharma-tech/webrtc-cpp-learning</a> <em>(My more complete work lives in private repos, but this one is for sharing the basics I test and learn.)</em></p>
</blockquote>
<h3 id="heading-heres-few-resources-took-me-months-to-find">Here are a few resources that took me months to find:</h3>
<ul>
<li><p><a target="_blank" href="https://docs.oracle.com/javase/8/docs/technotes/guides/jni/spec/jniTOC.html">https://docs.oracle.com/javase/8/docs/technotes/guides/jni/spec/jniTOC.html</a></p>
</li>
<li><p><a target="_blank" href="https://webrtc.googlesource.com/src/%2B/HEAD/modules/audio_device/g3doc/audio_device_module.md">https://webrtc.googlesource.com/src/%2B/HEAD/modules/audio_device/g3doc/audio_device_module.md</a></p>
</li>
<li><p><a target="_blank" href="https://gist.github.com/mysteryjeans/dfddbf73ab232fd3ef17c51d3b38433d">https://gist.github.com/mysteryjeans/dfddbf73ab232fd3ef17c51d3b38433d</a></p>
</li>
<li><p><a target="_blank" href="https://commondatastorage.googleapis.com/chrome-infra-docs/flat/depot_tools/docs/html/depot_tools_tutorial.html#_setting_up">https://commondatastorage.googleapis.com/chrome-infra-docs/flat/depot_tools/docs/html/depot_tools_tutorial.html#_setting_up</a></p>
</li>
<li><p><a target="_blank" href="https://webrtc.github.io/webrtc-org/architecture/#webrtc-native-c-api">https://webrtc.github.io/webrtc-org/architecture/#webrtc-native-c-api</a></p>
</li>
<li><p><a target="_blank" href="https://w3c.github.io/webrtc-pc/#simple-peer-to-peer-example">https://w3c.github.io/webrtc-pc/#simple-peer-to-peer-example</a></p>
</li>
<li><p><a target="_blank" href="https://webrtc.googlesource.com/src/%2B/refs/heads/lkgr/api/g3doc/index.md">https://webrtc.googlesource.com/src/%2B/refs/heads/lkgr/api/g3doc/index.md</a></p>
</li>
<li><p><a target="_blank" href="https://webrtc.googlesource.com/src/+/master/native-api.md">https://webrtc.googlesource.com/src/+/master/native-api.md</a></p>
</li>
<li><p><a target="_blank" href="https://webrtc.googlesource.com/src/+/refs/heads/main/examples/androidnativeapi/jni/">https://webrtc.googlesource.com/src/+/refs/heads/main/examples/androidnativeapi/jni</a></p>
</li>
<li><p><a target="_blank" href="https://webrtc.googlesource.com/src/+/refs/heads/main/api/">https://webrtc.googlesource.com/src/+/refs/heads/main/api</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Let AI handle my phone calls using bluetooth]]></title><description><![CDATA[Building What I Want — As Always
So, I’ve wanted this to build this for a long time. And I tried so many times, but couldn’t — because of Android (yes Android, I mean the android security for calls). The problem is that you have no control over the m...]]></description><link>https://blogs.adityasharma.tech/let-ai-handle-my-phone-calls-using-bluetooth</link><guid isPermaLink="true">https://blogs.adityasharma.tech/let-ai-handle-my-phone-calls-using-bluetooth</guid><category><![CDATA[raspberry pi 4]]></category><category><![CDATA[Android]]></category><dc:creator><![CDATA[Aditya Sharma]]></dc:creator><pubDate>Tue, 01 Jul 2025 17:29:14 GMT</pubDate><content:encoded><![CDATA[<hr />
<p><strong>Building What I Want — As Always</strong></p>
<p>So, I’ve wanted to build this for a long time. And I tried many times, but couldn’t, because of Android (yes, Android; I mean Android’s security around calls). The problem is that you have no control over the microphone or the audio flowing through a call. Sure, you can hang up or place a call. But you can’t create virtual sinks or sources on Android the way we usually do on Linux.</p>
<p>But one very interesting thing: Google seems to always listen to your calls for what they call <strong>advertising purposes</strong> ✌🏼; yes, I tested it. Anyway, I don’t know how they do it, or in which corner they put that policy. Just for fun I also did a <code>git clone</code> of <strong>AOSP (Android Open Source Project)</strong> on my PC, and with all its dependencies it came to around 300+ GB. I viewed the code, gained something, and exited.</p>
<p>Yes, you can also use services like <a target="_blank" href="https://www.twillio.com">Twilio</a> or <a target="_blank" href="https://sinch.com">Sinch</a> and use their APIs directly. But I want something cheap, on-device, with full access, and I just can’t afford a paid service running continuously for years.</p>
<p>Also, in <strong>India</strong>, <a target="_blank" href="https://truecaller.com">Truecaller</a> provides an AI Assistant feature that talks to spammers on your behalf and can ring through or hang up the call. A good feature, but they partnered with telecom companies to use their APIs. Again, we can’t afford that.</p>
<p>I tried 40+ times, again and again, in different ways to gain more access; that’s when I found out about call screening, which is available only to system apps and the default dialer app. After making a simple custom dialer, I got a lot of permissions related to calls. But I still couldn’t control the audio or the mic.</p>
<p>I am more interested in hardware than software; I have worked with electronics and IoT. So I got an idea: let’s take the part of Android we don’t have access to out of Android. While you are on a call and connected to a headphone, the headphone receives the call audio from the phone and sends your voice through its microphone, and that’s all I want.</p>
<p>And yes, I tried exactly that: simply sending the data to a Bluetooth IoT module, since it acts like a headphone. But none of them worked, and I tried many Bluetooth modules. The catch is that Bluetooth has profiles, like the hands-free profile <strong>HFP, plus HDP and HSP,</strong> which are mostly not available on IoT modules. But now I have a <code>raspberrypi 4B</code> model with 4 GB RAM: a whole tiny Linux machine, with full support for all the Bluetooth profiles, on which I can write all the code I want. So now I definitely had to do this.</p>
<p>Now I had almost everything I needed, and it worked for me. What I did was set the Bluetooth profile of the <code>raspberrypi 4</code> to act like a headphone for my Android device, then enabled only the Phone calls option and disabled Media sharing on my Android device for that <strong>rpi4</strong>. And that’s it: my phone now uses the <strong>raspberrypi 4</strong> microphone input as the call’s input and the <strong>raspberrypi 4</strong> output as the call’s output. Along the way I learned about <code>virtual sinks</code> and <code>virtual sources</code> on Linux.</p>
<p>Here’s the diagram explaining virtual sinks and sources, that’s how they loopback and work.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751389448965/3a4f9d95-4799-44f3-a4e4-e43eb80ed127.png" alt class="image--center mx-auto" /></p>
<p>So, I have to first create a virtual mic, you can create one with this command:</p>
<pre><code class="lang-bash">pactl load-module module-null-sink sink_name=VirtualMic sink_properties=device.description=VirtualMic
</code></pre>
<p>When you create a virtual mic (source) this way with the PulseAudio server, what it actually creates is a <strong>Null Sink</strong> (a blank virtual output device). Why? Because you need a way to tell the virtual mic what to “hear”: you simply play audio into the null sink by switching your output to it, and PulseAudio loops that audio back through the linked monitor source. I then set the default mic (source) to that monitor using <code>pactl set-default-source VirtualMic.monitor</code>. Now, playing some audio while my connected phone is on a call plays that same audio into the incoming/outgoing call. Recording is just as easy, and you can also create a virtual speaker so the call audio doesn’t play out loud. Now you can get the input and send the output.</p>
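<p>Putting the pieces together, the Pi-side plumbing looks roughly like this (file names are just examples; <code>paplay</code>/<code>parecord</code> ship with PulseAudio, and <code>@DEFAULT_MONITOR@</code> names the default sink’s monitor):</p>
<pre><code class="lang-bash"># 1. Null sink whose monitor acts as the call microphone
pactl load-module module-null-sink sink_name=VirtualMic sink_properties=device.description=VirtualMic
pactl set-default-source VirtualMic.monitor

# 2. Anything played into the sink is heard on the call
paplay --device=VirtualMic reply.wav

# 3. Record what the far end of the call is saying
parecord --device=@DEFAULT_MONITOR@ call.wav
</code></pre>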
<p>There are a lot of models available on <a target="_blank" href="https://huggingface.com">huggingface</a>, or you can use their Spaces directly, for <strong>TTS (text-to-speech) and STT (speech-to-text)</strong>. And it seems all done, right?</p>
<p>No, we have something left. How does your AI or script know whose call this is? The call details? How will it receive or hang up calls? I don’t want my AI to talk like a chatbot that knows nothing about the call. No, I don’t want that.</p>
<p>There is something known as <strong>AT commands</strong>, built into the Bluetooth core: that’s how Bluetooth devices communicate commands between each other, and which commands are available depends on the profiles in use. I tried to listen for AT commands, but I could only see them in the terminal; I couldn’t access them in a structured format. So I decided to create an application to manually send commands over <code>RFCOMM</code>, and added the Bluetooth service inside my own Android app (I develop an app for myself; whatever I need, I just add in). I created a whole Bluetooth server with a retry mechanism and command writing. It was a lot of mess with Android permissions; it really is difficult to do anything around calling. I couldn’t access the phone number directly, and I wanted to send it over Bluetooth to the Python RFCOMM server running on the Raspberry Pi 4, so I used a deprecated Android feature and it works. Everything just works, though it crashes a few times.</p>
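<p>For anyone curious, the Pi-side server can be sketched like this (my own minimal sketch: the RFCOMM channel number and the <code>+CLIP</code>-style line format are assumptions based on HFP conventions; Python’s <code>socket</code> module supports <code>AF_BLUETOOTH</code> on Linux):</p>
<pre><code class="lang-python">import socket

def parse_clip(line):
    # '+CLIP: "9876543210",129' is the HFP-style caller-ID result code
    if not line.startswith("+CLIP:"):
        return None
    number = line.split(":", 1)[1].split(",")[0].strip().strip('"')
    return number or None

def serve(channel=3):
    # AF_BLUETOOTH / BTPROTO_RFCOMM are available in Python on Linux (e.g. Raspberry Pi OS)
    srv = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM)
    srv.bind((socket.BDADDR_ANY, channel))
    srv.listen(1)
    conn, addr = srv.accept()
    buf = b""
    while True:
        data = conn.recv(1024)
        if not data:
            break
        buf += data
        while b"\r\n" in buf:
            line, buf = buf.split(b"\r\n", 1)
            number = parse_clip(line.decode(errors="ignore"))
            if number:
                print("incoming call from", number)
</code></pre>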
<p>But now I am reworking the mechanism to listen for AT commands directly, so I don’t have to build the Android app at all. Still experimenting; oFono is helping me here, but let’s see what happens. I will update this when it’s completed.</p>
]]></content:encoded></item><item><title><![CDATA[PRD: Real-Time local data plotter for IoT devices]]></title><description><![CDATA[Author: Aditya Sharma
Overview
A realtime local data visualization tool from local IoT devices, this allows IoT developers to send custom raw data format numeric data to visualize using graphs with very very low letency. It also provide for logging m...]]></description><link>https://blogs.adityasharma.tech/prd-real-time-local-data-plotter-for-iot-devices</link><guid isPermaLink="true">https://blogs.adityasharma.tech/prd-real-time-local-data-plotter-for-iot-devices</guid><category><![CDATA[ChaiCode]]></category><category><![CDATA[iot]]></category><category><![CDATA[iot project]]></category><category><![CDATA[Graph]]></category><dc:creator><![CDATA[Aditya Sharma]]></dc:creator><pubDate>Tue, 28 Jan 2025 07:31:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738049065388/1e103bc4-8a04-49e5-aa23-006ef413500b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Author:</strong> Aditya Sharma</p>
<h2 id="heading-overview">Overview</h2>
<p>A realtime, local data-visualization tool for local IoT devices. It allows IoT developers to send numeric data in a custom raw format and visualize it on graphs with very low latency. It also supports logging messages directly to your dashboard, and connections via <strong>Serial (USB)</strong>, <strong>WiFi</strong>, or <strong>Bluetooth</strong>. Users just need to throw data at any port they want, with any connection method, and the dashboard will structure it automatically, saving all your data locally or in the cloud (future feature).</p>
<h2 id="heading-objective">Objective</h2>
<ol>
<li><p>Real-Time data display</p>
</li>
<li><p>Local setup</p>
</li>
<li><p>Very low latency</p>
</li>
<li><p>Interactive visualisation</p>
</li>
<li><p>Supports any platform with a browser</p>
</li>
</ol>
<h2 id="heading-problem">Problem</h2>
<p>Engineers &amp;, mostly, hobbyists have to do a whole setup even for a very simple project, and after all that they still don’t get good results or modern data visualization; just high latency.</p>
<h2 id="heading-persona">Persona</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Developers</strong></td><td>System developers working with IoT devices who need quick, easy data visualization and don’t want to mess with the whole cloud stack</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Engineers</strong></td><td>Like developers, but using different types of sensors: collecting data, saving it, and showing it in a readable format</td></tr>
<tr>
<td><strong>Hobbyists</strong></td><td>DIY enthusiasts like me working with ESP32 and looking for a modern way to visualize data locally</td></tr>
</tbody>
</table>
</div><h2 id="heading-feature-in">Feature In</h2>
<ul>
<li><p>local CSV out support</p>
</li>
<li><p>real-time data output</p>
</li>
<li><p>modern graphs</p>
</li>
<li><p>messages, error logs support</p>
</li>
<li><p>smooth data visualisation</p>
</li>
<li><p>support any connection (Serial / Bluetooth / Wifi)</p>
</li>
<li><p>multiple ports visualization at same time</p>
</li>
</ul>
<h2 id="heading-feature-out">Feature Out</h2>
<ul>
<li><p>Maybe mobile support (ports are a problem there)</p>
</li>
<li><p>Maybe cloud support in the future</p>
</li>
<li><p>Going to try not to build my own graph library</p>
</li>
</ul>
<h2 id="heading-design">Design</h2>
<p>Basic outline of what the platform will look like and how you can visualize your data locally:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738043232686/b83182e6-36a5-4f15-9634-82ea3a8219f6.png" alt class="image--center mx-auto" /></p>
<p><strong>Graph:</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738043808392/0cff9b46-1b23-430d-9b7e-9f34060ac254.png" alt class="image--center mx-auto" /></p>
<p><strong>Communication:</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738048541299/96b5c768-7549-480a-b809-c0a168ca0c41.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-technical-considerations"><strong>Technical Considerations</strong></h2>
<p>Libraries and framework we will use to build this product:</p>
<ul>
<li><p>ReactJS</p>
</li>
<li><p>Material UI</p>
</li>
<li><p>SerialPort</p>
</li>
<li><p>@abandonware/noble</p>
</li>
<li><p>axios</p>
</li>
</ul>
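<p>As a sketch of how the Node side might ingest device data: the one-line <code>key:value</code> frame format below is purely hypothetical (the real on-wire format is still an open issue), and the wiring assumes the <code>serialport</code> package from the list above:</p>
<pre><code class="lang-javascript">// Hypothetical frame format: "temp:23.5,hum:40"  ->  { temp: 23.5, hum: 40 }
function parseFrame(line) {
  const out = {};
  for (const pair of line.trim().split(",")) {
    const [key, value] = pair.split(":");
    if (key) {
      const n = Number(value);
      if (!Number.isNaN(n)) {
        out[key.trim()] = n;
      }
    }
  }
  return out;
}

// Wiring it to a device (not run here; needs the serialport package and real hardware)
function startReading(path) {
  const { SerialPort, ReadlineParser } = require("serialport");
  const port = new SerialPort({ path, baudRate: 115200 });
  port.pipe(new ReadlineParser({ delimiter: "\n" }))
      .on("data", (line) => console.log(parseFrame(line)));
}
</code></pre>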
<h2 id="heading-gtm-approach"><strong>GTM Approach</strong></h2>
<p><strong>Tagline:</strong> Seamless Device Connectivity and Real-Time Data Visualization at Your Fingertips.</p>
<p><strong>Key messages:</strong></p>
<ol>
<li><p>Our platform bridges the gap between your hardware and the web, enabling real-time data transfer and visualization via serial, Bluetooth, or WiFi.</p>
</li>
<li><p>Supports a wide range of devices, both for visualization &amp; on the IoT side, like <strong>Raspberry Pi</strong>, <strong>ESP32</strong>, <strong>Arduino</strong>, <strong>BLE Sensors</strong> and more.</p>
</li>
<li><p>Multi-channel support (Serial, Bluetooth, WiFi) to fit all communication needs.</p>
</li>
</ol>
<p><strong>Differentiators:</strong></p>
<ol>
<li><p>No more complex setups - just connect your devices as you want and start visualizing your data.</p>
</li>
<li><p>Cross-platform support - every device has a browser; no additional tools or setup installs needed.</p>
</li>
<li><p>100% Realtime - it runs locally, so it is true realtime.</p>
</li>
</ol>
<p><strong>Launch:</strong></p>
<ol>
<li><p>Launch on <em>Product Hunt</em> to reach a tech audience.</p>
</li>
<li><p>Share on social media &amp; get on creators’ radars.</p>
</li>
<li><p>This product will be totally open-source and free for everyone.</p>
</li>
</ol>
<h2 id="heading-open-issues">Open Issues</h2>
<p>We still have to figure out how to build a tiny library for IoT devices that helps them easily throw their data at the ports, without writing the whole handler themselves.</p>
<h2 id="heading-feature-timeline-and-phasing"><strong>Feature Timeline and Phasing</strong></h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Feature</strong></td><td><strong>Status</strong></td><td><strong>Dates</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Frontend Development</td><td><mark>Done</mark></td><td>Jan 28 2025</td></tr>
<tr>
<td>Local Backend</td><td><mark>Done</mark></td><td>Jan 29 2025</td></tr>
<tr>
<td>IoT library</td><td><mark>Done</mark></td><td>-</td></tr>
<tr>
<td>Testing</td><td><mark>Done</mark></td><td>-</td></tr>
<tr>
<td>Product Hunt Launch</td><td><mark>Done</mark></td><td>-</td></tr>
</tbody>
</table>
</div>]]></content:encoded></item><item><title><![CDATA[How the data move around the whole internet #chaicode]]></title><description><![CDATA[Let’s start with networking; So, If we see the networking we basically have to transport data from one machine to another machine. There are 7 layers between transporting data from one machine to another machine which is knows as Model OSI.

Let’s un...]]></description><link>https://blogs.adityasharma.tech/how-the-data-move-around-the-whole-internet-chaicode</link><guid isPermaLink="true">https://blogs.adityasharma.tech/how-the-data-move-around-the-whole-internet-chaicode</guid><category><![CDATA[ChaiCode]]></category><dc:creator><![CDATA[Aditya Sharma]]></dc:creator><pubDate>Sun, 19 Jan 2025 07:08:47 GMT</pubDate><content:encoded><![CDATA[<p>Let’s start with networking; So, If we see the networking we basically have to transport data from one machine to another machine. There are 7 layers between transporting data from one machine to another machine which is knows as <strong>Model OSI</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736850895639/db041686-3ad8-4264-a9d3-b8cde948c6cd.png" alt class="image--center mx-auto" /></p>
<p>Let’s understand the layers one by one so we can understand it briefly:</p>
<ol>
<li><h3 id="heading-physical-layer">Physical layer</h3>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736853074085/4311f856-dcd0-49a1-b1ee-815b2e4e2785.png" alt class="image--center mx-auto" /></p>
<p> If you don’t know, there are a lot of cables laid on the seabed of the oceans that allow data to transfer from one continent to another; through a <em>Cable Landing Station (CLS)</em> the data then spreads across the whole continent, and towers receive it and send it on through mobile towers up to the data link layer. At this level it’s just bits, 1s &amp; 0s, travelling in the form of light in the cables. <mark>So, how do these bits turn into readable data? </mark> 😉 For this we have to move on to the data link layer.</p>
</li>
<li><h3 id="heading-data-link-layer">Data Link Layer</h3>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736853155518/aff216a1-9bac-483a-97e2-319bd99e38fd.png" alt class="image--center mx-auto" /></p>
<p> This layer is responsible for delivering data (frames) from one node to another, which includes encoding, decoding and organizing the outgoing data; a major role of this layer is also to make sure data is delivered error-free. <mark>So, do you know how this layer knows where the data must be sent? 🤔</mark> Yes, it’s the MAC address: every device has its own unique MAC address, which this layer adds to the header of each frame, and that’s how frames find their destination. Let’s move on to the network layer.</p>
</li>
<li><h3 id="heading-network-layer">Network Layer</h3>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736853647652/1994b501-5506-49ef-af4d-c26fa314df5f.png" alt class="image--center mx-auto" /></p>
<p> Here your home router plays a crucial role in the network layer (or you could say your smartphone, because it is basically your phone that connects directly to the cell tower). The main purpose of the network layer is routing, forwarding and addressing: it addresses data using IP addresses and handles packet forwarding (creation → transport → packet reassembly).</p>
</li>
<li><h3 id="heading-transport-layer">Transport layer</h3>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736853827636/06a2d5f9-f46a-4efe-a633-dd2215f5eba9.png" alt class="image--center mx-auto" /></p>
<p> This is the interesting part because here <strong>we can control this layer.</strong> <mark>Wanna know how?</mark>🤨</p>
<p> This layer provides communication services for the application: it manages error handling, flow control and complete data transfer, and based on your application’s needs you can choose the protocol you want. <strong>TCP</strong> (Transmission Control Protocol) and <strong>UDP</strong> (User Datagram Protocol) are the two default protocols you can pick between to manage your data transfer. If you want reliability, choose TCP; if you want low latency, you have UDP. Both have their pros and cons, or you can make your own protocol, as Zoom does. Let’s move on to the application layer.</p>
</li>
</ol>
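<p>To make the TCP-vs-UDP difference concrete, here is a tiny sketch of my own (an illustration, not part of any spec) of a UDP exchange over localhost in Python; notice there is no connect/accept handshake the way TCP would require:</p>
<pre><code class="lang-python">import socket
import threading

def udp_echo_once():
    # UDP needs no connection setup: bind a socket, fire one datagram at it
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
    port = srv.getsockname()[1]

    def serve():
        data, addr = srv.recvfrom(1024)  # one datagram in...
        srv.sendto(data.upper(), addr)   # ...one datagram back

    t = threading.Thread(target=serve, daemon=True)
    t.start()

    cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    cli.sendto(b"hello", ("127.0.0.1", port))
    reply, _ = cli.recvfrom(1024)
    t.join()
    srv.close()
    cli.close()
    return reply
</code></pre>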
<p>#chaicode</p>
]]></content:encoded></item><item><title><![CDATA[Brief overview about brightanalytics.in]]></title><description><![CDATA[Building a full-scale open-source collaborative analytics platform to help users to collect & monitor their traffic along various platforms. For now we only support web but from next month we will support both iPhone & Android devices. We also have f...]]></description><link>https://blogs.adityasharma.tech/brief-overview-about-brightanalyticsin</link><guid isPermaLink="true">https://blogs.adityasharma.tech/brief-overview-about-brightanalyticsin</guid><category><![CDATA[brightanalytics]]></category><category><![CDATA[analytics]]></category><category><![CDATA[Analytics  for Ecommerce]]></category><category><![CDATA[ analytics tools]]></category><category><![CDATA[Google Analytics]]></category><category><![CDATA[website traffic analytics]]></category><dc:creator><![CDATA[Aditya Sharma]]></dc:creator><pubDate>Sun, 12 Jan 2025 11:42:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1736681935468/156554de-cc88-4a7a-9e63-23f5c8af88c5.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Building a full-scale open-source collaborative analytics platform to help users to collect &amp; monitor their traffic along various platforms. For now we only support web but from next month we will support both iPhone &amp; Android devices. We also have future plans to monitor systems apps, I mean for windows &amp; for macOS it can be late.</p>
<h3 id="heading-building-frameworks-for-react-react-native-react-native-windows-amp-native">Building frameworks for react, react-native, react-native-windows &amp; native</h3>
<p>We plan to build all the necessary frameworks to view your app’s insights from the core level; through this you can connect deeply to your app, and the frameworks will help you integrate all your data points yourself. The frameworks will be optimized so as not to harm your app’s performance.</p>
<hr />
<h2 id="heading-for-developers-who-want-to-contribute">For developers who want to contribute</h2>
<h3 id="heading-tech-stack-used-in-the-app">Tech stack used in the app:</h3>
<ul>
<li><p>nodejs (TypeScript)</p>
</li>
<li><p>Apache Kafka to handle millions of events</p>
</li>
<li><p>Apache Pulsar (very basic)</p>
</li>
<li><p>Redis DB for caching</p>
</li>
<li><p>Postgres DB (Will use Cassandra in future)</p>
</li>
<li><p>RabbitMQ for queues</p>
</li>
</ul>
<h3 id="heading-github-repo-amp-links">Github Repo &amp; Links:</h3>
<p>Here in the <a target="_blank" href="https://github.com/Bright-Analytic">Github Org</a> you can find all the open-source repos for this platform; we are open to contributions now.</p>
<p>Visit on our homepage here: <a target="_blank" href="https://brightanalytics.in">brightanalytics.in</a>, frontend is also fully open-source.</p>
]]></content:encoded></item></channel></rss>