I'm sitting in the woods making a fire when I hear something behind me. I quickly grab my knife and turn around, and without even looking at the person, not even realising it's you, I charge at you and pin you to the ground.
Intro ~~Zombie apocalypse~~
Zac: he is tall, strong, and fast; very cold and distant; 21 now; protective; a really good fighter; doesn't trust people easily; mixed Korean and white; messy black hair
STORY: Years ago, a zombie apocalypse broke out when Zac was only 6 and you were 4. You two were in the same survival group and did everything together, and you both got really good at fighting zombies and surviving the terrors. But one day your group was attacked by a band of cannibal survivors, and Zac only had the chance to save you.
But then you two got separated in the chaos, and years later you each thought the other was dead, but…
**Step-by-Step Explanation of Backpropagation**
1. **Forward Propagation**:
- **Input to Hidden Layer**: Compute the weighted sum \( z^{(l)} = W^{(l)} \cdot a^{(l-1)} + b^{(l)} \), where \( a^{(l-1)} \) is the activation from the previous layer (input data for \( l=1 \)).
- **Activation Function**: Apply activation \( g \) (e.g., sigmoid, ReLU) to \( z^{(l)} \): \( a^{(l)} = g(z^{(l)}) \).
- Repeat for each layer until the output \( a^{(L)} \) is generated.
2. **Compute Loss**:
- Calculate the error using a loss function \( \mathcal{L} \) (e.g., mean squared error) between the predicted output \( a^{(L)} \) and true labels \( y \).
3. **Backward Propagation**:
- **Output Layer (Layer \( L \))**:
- Compute gradient of loss w.r.t. outputs: \( \delta^{(L)} = \frac{\partial \mathcal{L}}{\partial a^{(L)}} \).
- Multiply by derivative of activation: \( \delta^{(L)} = \delta^{(L)} \odot g'(z^{(L)}) \), where \( \odot \) is element-wise multiplication.
- **Hidden Layers (Layer \( l = L-1, ..., 1 \))**:
- Propagate error backward: \( \delta^{(l)} = (W^{(l+1)})^T \cdot \delta^{(l+1)} \odot g'(z^{(l)}) \).
- **Calculate Gradients**:
- For weights: \( \frac{\partial \mathcal{L}}{\partial W^{(l)}} = \delta^{(l)} \cdot (a^{(l-1)})^T \).
- For biases: \( \frac{\partial \mathcal{L}}{\partial b^{(l)}} = \delta^{(l)} \).
4. **Update Parameters**:
- Adjust weights and biases using gradient descent:
\[
W^{(l)} = W^{(l)} - \eta \cdot \frac{\partial \mathcal{L}}{\partial W^{(l)}}
\]
\[
b^{(l)} = b^{(l)} - \eta \cdot \frac{\partial \mathcal{L}}{\partial b^{(l)}}
\]
- \( \eta \) is the learning rate.
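The four steps above can be made concrete with a minimal NumPy sketch. The network size (2-3-1), sigmoid activations, mean-squared-error loss, and the numeric values below are illustrative assumptions, not part of the explanation above; the code just mirrors the forward pass, loss, backward pass, and update in the same notation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

# Toy 2-3-1 network; inputs and activations are column vectors,
# matching the notation z = W a + b used above.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros((3, 1))
W2, b2 = rng.normal(size=(1, 3)), np.zeros((1, 1))
eta = 0.1                       # learning rate

x = np.array([[0.5], [-1.0]])   # input a^(0)
y = np.array([[1.0]])           # true label

# 1. Forward propagation
z1 = W1 @ x + b1;  a1 = sigmoid(z1)
z2 = W2 @ a1 + b2; a2 = sigmoid(z2)

# 2. Loss (mean squared error)
loss = 0.5 * np.sum((a2 - y) ** 2)

# 3. Backward propagation
delta2 = (a2 - y) * sigmoid_prime(z2)          # output-layer error delta^(L)
delta1 = (W2.T @ delta2) * sigmoid_prime(z1)   # error propagated to the hidden layer

dW2, db2 = delta2 @ a1.T, delta2               # gradients w.r.t. W^(2), b^(2)
dW1, db1 = delta1 @ x.T,  delta1               # gradients w.r.t. W^(1), b^(1)

# 4. Gradient-descent update
W2 -= eta * dW2; b2 -= eta * db2
W1 -= eta * dW1; b1 -= eta * db1
```

Repeating one such step over many examples is exactly the gradient-descent loop described in step 4.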
**Key Concepts**:
- **Chain Rule**: Efficiently decomposes gradients across layers.
- **Activation Derivatives**: Ensure differentiability (e.g., ReLU: \( g'(z) = 1 \) if \( z > 0 \), else 0).
- **Efficiency**: Reuses computed values (\( \delta^{(l)} \)) to avoid redundant calculations, enabling deep networks.
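As a quick check of the ReLU derivative quoted above, here is a tiny, purely illustrative NumPy snippet:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def relu_prime(z):
    # g'(z) = 1 where z > 0, else 0, as stated above
    return (z > 0).astype(z.dtype)

z = np.array([-2.0, 0.5, 3.0])
print(relu(z))        # [0.  0.5 3. ]
print(relu_prime(z))  # [0. 1. 1.]
```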
**Example**: For a network predicting cat/dog images:
1. **Forward Pass**: Pixels → hidden features → output probabilities.
2. **Loss**: Compare probabilities to true labels (e.g., cross-entropy loss).
3. **Backward Pass**: Calculate how much each weight contributed to the error, adjust weights to reduce future error.
This process iterates over batches of data until the model converges.
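Below is a hedged sketch of such a batch-wise training loop. Random feature vectors stand in for cat/dog images, and a single softmax layer trained with cross-entropy keeps it short; all sizes and hyperparameters are assumptions made for illustration, not part of the explanation above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 64-dimensional feature vectors stand in for cat/dog images;
# labels are 0 ("cat") or 1 ("dog"). One softmax layer keeps the sketch short.
X = rng.normal(size=(200, 64))
y = rng.integers(0, 2, size=200)
W = np.zeros((64, 2))
b = np.zeros(2)
eta, batch_size = 0.1, 32

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for epoch in range(10):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        xb, yb = X[batch], y[batch]

        # Forward pass and cross-entropy loss
        probs = softmax(xb @ W + b)
        loss = -np.log(probs[np.arange(len(yb)), yb]).mean()

        # Backward pass: for softmax + cross-entropy, the gradient
        # w.r.t. the logits is (probs - one_hot), averaged over the batch
        grad_logits = probs.copy()
        grad_logits[np.arange(len(yb)), yb] -= 1.0
        grad_logits /= len(yb)

        # Gradient-descent update
        W -= eta * (xb.T @ grad_logits)
        b -= eta * grad_logits.sum(axis=0)
```

In a deeper network, the hidden layers would be handled with the delta rule from step 3, propagating the same logit gradient backward layer by layer.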
Roix
16/03/2025
uhm, this is the most detailed OC I've ever made
Zac, chill, it's me, Ruki! *I'm a lot different now. I'm 7'5, very fast, still friendly, though cold and rude sometimes. I'm 20 now and a very, very good fighter, and I trust anyone who makes me feel safe. My hair is longer but still fluffy somehow, because I've come up with a lotion and conditioner, a working shower with a valve to control the water pressure and even pipes running through the walls, and a home that looks like it was built by a professional building company, but I built it myself. I'm way more muscular now and a lot more attractive. I'm a male.*
Roix
16/03/2025
I made myself smart, don't blame me, alright!
gigamind
12/03/2025
he killed me one message in💔💔💔
c☆smo
12/03/2025
bro tho imagine after he killed you he realized who you were-
LittleGaming1127
12/03/2025
What did you say
Xx•Michae_afton•xX
12/03/2025
imagine being killed by william
Xx•Michae_afton•xX
12/03/2025
the man behind the slaughter
LittleGaming1127
12/03/2025
I made it where he found me after I got bit so he decided to cut off the infected parts
{~🖤angel dust🩷~}
11/03/2025
believe it buddy believe it
I thought you were dead!
*I pull away to look you in the eyes* you thought I was dead?