<![CDATA[Jaigouk.Kim]]>https://jaigouk.com/https://jaigouk.com/favicon.pngJaigouk.Kimhttps://jaigouk.com/Ghost 4.42Tue, 26 Jul 2022 16:29:38 GMT60<![CDATA[GPT-3 Codex]]>https://jaigouk.com/gpt-3-codex/62d64d3c8ff0384a008c3809Tue, 19 Jul 2022 07:12:08 GMT

When I saw "Using GPT-3 to explain how code works" on HN, I was skeptical. Recently, I got access to OpenAI, so I played with OpenAI's GPT-3 Codex. It is still in private beta, but I was surprised by the result. It was quite useful.

Experiment

"Go with Test" Gitbook author wrote following steps for code snippets that he will refactor.

1. Write HTTP responses, send headers, status codes, etc.
2. Decode the request's body into a User
3. Connect to a database (and all the details around that)
4. Query the database and apply some business logic depending on the result
5. Generate a password
6. Insert a record

And here is the result from Codex:

1. check if there is proper json body or error
2. check if username already exists in users datastore, if so, 400
3. else insert user right away
4. return 200

The golang code I used:

// Registration function
func Registration(w http.ResponseWriter, r *http.Request) {
    var res model.ResponseResult
    var user model.User

    w.Header().Set("Content-Type", "application/json")

    jsonDecoder := json.NewDecoder(r.Body)
    jsonDecoder.DisallowUnknownFields()
    defer r.Body.Close()

    // check if there is proper json body or error
    if err := jsonDecoder.Decode(&user); err != nil {
        res.Error = err.Error()
        // return 400 status codes
        w.WriteHeader(http.StatusBadRequest)
        json.NewEncoder(w).Encode(res)
        return
    }

    // Connect to mongodb
    client, _ := mongo.NewClient(options.Client().ApplyURI("mongodb://127.0.0.1:27017"))
    ctx, _ := context.WithTimeout(context.Background(), 10*time.Second)
    err := client.Connect(ctx)
    if err != nil {
        panic(err)
    }
    defer client.Disconnect(ctx)
    // Check if username already exists in users datastore, if so, 400
    // else insert user right away
    collection := client.Database("test").Collection("users")
    filter := bson.D{{"username", user.Username}}
    var foundUser model.User
    err = collection.FindOne(context.TODO(), filter).Decode(&foundUser)
    if foundUser.Username == user.Username {
        res.Error = UserExists
        // return 400 status codes
        w.WriteHeader(http.StatusBadRequest)
        json.NewEncoder(w).Encode(res)
        return
    }

    pass, err := bcrypt.GenerateFromPassword([]byte(user.Password), bcrypt.DefaultCost)
    if err != nil {
        res.Error = err.Error()
        // return 400 status codes
        w.WriteHeader(http.StatusBadRequest)
        json.NewEncoder(w).Encode(res)
        return
    }
    user.Password = string(pass)

    insertResult, err := collection.InsertOne(context.TODO(), user)
    if err != nil {
        res.Error = err.Error()
        // return 400 status codes
        w.WriteHeader(http.StatusBadRequest)
        json.NewEncoder(w).Encode(res)
        return
    }

    // return 200
    w.WriteHeader(http.StatusOK)
    res.Result = fmt.Sprintf("%s: %s", UserCreated, insertResult.InsertedID)
    json.NewEncoder(w).Encode(res)
    return
}
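
The steps above came from appending a comment prompt to the end of this snippet, the same trick as in the second experiment below. It was roughly like this (reconstructed, not the exact wording I used):

// describe the Registration function as a list of steps
// 1.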

And then I tried the refactored code:

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

// User represents a person in our system.
type User struct {
    Name string
}

// UserService provides ways of working with users.
type UserService interface {
    Register(user User) (insertedID string, err error)
}

// UserServer provides an HTTP API for working with users.
type UserServer struct {
    service UserService
}

// NewUserServer creates a UserServer.
func NewUserServer(service UserService) *UserServer {
    return &UserServer{service: service}
}

// RegisterUser is a http handler for storing users.
func (u *UserServer) RegisterUser(w http.ResponseWriter, r *http.Request) {
    defer r.Body.Close()

    var newUser User
    err := json.NewDecoder(r.Body).Decode(&newUser)

    if err != nil {
        http.Error(w, fmt.Sprintf("could not decode user payload: %v", err), http.StatusBadRequest)
        return
    }

    insertedID, err := u.service.Register(newUser)

    if err != nil {
        //todo: handle different kinds of errors differently
        http.Error(w, fmt.Sprintf("problem registering new user: %v", err), http.StatusInternalServerError)
        return
    }

    w.WriteHeader(http.StatusCreated)
    fmt.Fprint(w, insertedID)
}

// MongoUserService provides storage functionality for Users.
type MongoUserService struct {
}

// NewMongoUserService creates a new MongoUserService managing connection pools etc probably!.
func NewMongoUserService() *MongoUserService {
    //todo: pass in DB URL as argument to this function
    //todo: connect to db, create a connection pool
    return &MongoUserService{}
}

// Register will store a user in mongo.
func (m MongoUserService) Register(user User) (insertedID string, err error) {
    // use m.mongoConnection to perform queries
    panic("implement me")
}

// main function
func main() {
    mongoService := NewMongoUserService()
    server := NewUserServer(mongoService)
    log.Fatal(http.ListenAndServe(":8000", http.HandlerFunc(server.RegisterUser)))
}

// describe main function and NewUserServer function and RegisterUser 
// 1.

1. main function creates a mongo service and a user server
2. main function starts a http server and registers the RegisterUser handler
3. RegisterUser handler is called when a request is made to the server
4. RegisterUser handler calls the UserService to store the user
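
For reference, this kind of prompt can also be sent programmatically. Below is a minimal, untested sketch in Go; the /v1/completions endpoint is OpenAI's standard completions API, but the code-davinci-002 model name is my assumption about what the beta exposes, and <code snippet> stands for the program above.

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
    "os"
)

func main() {
    // assumption: Codex is exposed via the completions endpoint during the beta
    payload, _ := json.Marshal(map[string]interface{}{
        "model":      "code-davinci-002", // assumed model name; check your beta access
        "prompt":     "<code snippet>\n// describe main function and NewUserServer function and RegisterUser\n// 1.",
        "max_tokens": 128,
    })

    req, _ := http.NewRequest("POST", "https://api.openai.com/v1/completions", bytes.NewReader(payload))
    req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))
    req.Header.Set("Content-Type", "application/json")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    // print the completion text of each returned choice
    var out struct {
        Choices []struct {
            Text string `json:"text"`
        } `json:"choices"`
    }
    if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
        panic(err)
    }
    for _, choice := range out.Choices {
        fmt.Println(choice.Text)
    }
}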

Of course it is not perfect; you need to express your intentions well enough. But can you imagine a future where Codex is used everywhere? This is more than using Copilot. "Ruby on Rails" became famous in the 2000s because it was easy to express your thoughts in code and build a prototype fast, so lots of startups launched their products quickly and iterated on them based on customer insights. From my point of view, Copilot and Codex are helping me build a product. Soon, it will be like having a senior developer with me.

Concerns

Is my code encrypted? People think Copilot is evil because open source contributors never agreed to this use of their work.

Give Up GitHub: The Time Has Come!
Those who forget history often inadvertently repeat it. Some of us recall that twenty-one years ago, the most popular code hosting site, a fully Free and Open Source (FOSS) site called SourceForge, proprietarized all their code — never to make it FOSS again. Major FOSS projects slowly left Source…

On HN, "GPT-3 reveals my full name – can I do anything?" was posted:

What's the current status of Personally Identifying Information and language models?
I try to hide my real name whenever possible, out of an abundance of caution. You can still find it if you search carefully, but in today's hostile internet I see this kind of soft pseudonymity as my digital personal space, and expect to have it respected.
When playing around in GPT-3 I tried making sentences with my username.

End-to-end encryption is a must. But more than that, we don't know that Codex won't share my data with the authorities. It sounds absurd, but check out this article:

New documents reveal scale of US Government’s cell phone location data tracking

It will take time for something like Codex to become a commodity. Even the open source version of a similar project requires 200GB of GPU RAM, which is about 10 RTX 3090 cards. That is a heavy budget for solo founders or small companies. And gathering the dataset is also a problem. I guess big companies like OpenAI or Microsoft would not let it go; they will hold it and build high walls to dominate the market.

The Future

Design would be something that AI can't do better than humans (I don't mean generating mockups, like the following examples):

DALL·E 2
DALL·E 2 is a new AI system that can create realistic images and art from a description in natural language.
NVIDIA Canvas : Harness The Power Of AI
Create Backgrounds Quickly, or Speed up your Concept Exploration.

The same goes for designing services and digging into customers' pains. There are things that don't scale.

]]>
<![CDATA[Building docker image for rust]]>https://jaigouk.com/building-docker-image-for-rust/6273b244c78f8949396db4abThu, 05 May 2022 11:28:24 GMT

These days, I am playing with Rust.

Before using cargo-chef, it took 30 minutes to build an image on a Raspberry Pi 4, even with multi-stage ("AS") layers.

With https://github.com/LukeMathWalker/cargo-chef, the initial build took almost the same time, but after that, building the image took 2m 30s. On beefy machines it would be much faster, but building arm64 images on the rpi matters to me because I am using a k3s cluster as my homelab deployment environment.

FROM rust:1.60.0 AS chef
# We only pay the installation cost once,
# it will be cached from the second build onwards
RUN cargo install cargo-chef
WORKDIR /app

FROM chef AS planner
COPY . .
RUN cargo chef prepare  --recipe-path recipe.json

FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
# Build dependencies - this is the caching Docker layer!
RUN cargo chef cook --release --recipe-path recipe.json
# Build application
COPY . .
ARG DATABASE_URL="postgres://username:[email protected]:5432/myapp_production?sslmode=disable"
ENV DATABASE_URL="${DATABASE_URL}"

ARG TEST_DATABASE_URL="postgres://username:[email protected]:5432/myapp_service_ci?sslmode=disable"
ENV TEST_DATABASE_URL="${TEST_DATABASE_URL}"

RUN cargo build --release --bin ssr

# We do not need the Rust toolchain to run the binary!
FROM debian:bullseye-slim AS runtime
# Application dependencies
# We use an external Aptfile for this, stay tuned
COPY Aptfile /tmp/Aptfile
RUN apt-get update -qq && DEBIAN_FRONTEND=noninteractive apt-get -yq dist-upgrade && \
  DEBIAN_FRONTEND=noninteractive apt-get install -yq --no-install-recommends \
    $(grep -Ev '^\s*#' /tmp/Aptfile | xargs) \
    && apt-get clean \
    && rm -rf /var/cache/apt/archives/* \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \
    && truncate -s 0 /var/log/*log

# I use dbmate to maintain db migrations
RUN wget --no-check-certificate https://github.com/amacneil/dbmate/releases/download/v1.15.0/dbmate-linux-arm64 \
  && mv dbmate-linux-arm64 /bin/dbmate \
  && chmod +x /bin/dbmate
  
WORKDIR /app
COPY --from=builder /app/target/release/ssr .
RUN mkdir static && mkdir db
COPY --from=builder /app/static ./static/
COPY --from=builder /app/db ./db/

ARG HOST_PORT="0.0.0.0:3000"
ENV HOST_PORT="${HOST_PORT}"

ARG LOG_LEVEL="info"
ENV LOG_LEVEL="${LOG_LEVEL}"

# ARG values do not cross stage boundaries, so re-declare them for this stage
ARG DATABASE_URL
ARG TEST_DATABASE_URL
ENV DATABASE_URL="${DATABASE_URL}"
ENV TEST_DATABASE_URL="${TEST_DATABASE_URL}"

EXPOSE 3000

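# assumption: the real entrypoint (./ssr) is set by the k3s manifests;
# for standalone use, something like CMD ["./ssr"] would make more sense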
CMD ["/bin/bash"]

]]>
<![CDATA[hosting ghost blog with static pages on github]]>https://jaigouk.com/host-ghost-blog-with-static-pages-on-github/625cb85d3d364f64596f879fMon, 18 Apr 2022 02:30:38 GMT

I am hosting my Ghost blog as static pages on GitHub, but I am writing the posts on a Ghost instance running on an rpi.

[Unit]
Description=gssg ghost publishing service
After=network.service
Requires=network.service

[Service]
TimeoutStartSec=10
Restart=always
User=myuser
Group=myuser
WorkingDirectory=/home/myuser/static-ghost/ghost-publisher
ExecStart=/home/myuser/.cargo/bin/cargo run --bin gssg

[Install]
WantedBy=multi-user.target

gssg is a simple Rust service that triggers ghost-static-site-generator on the Raspberry Pi and then pushes the output to GitHub Pages. It is a Faktory consumer.

There is another webhook service (actix-web) acting as a Faktory producer. Because gssg takes time to fetch blog posts, and pushing also takes time, this should be done as a long-running background job.

Here is the webhook part:


    use actix_web::{get, post, web, App, HttpResponse, HttpServer, Responder};

    use faktory::{Job, Producer};

    // assumed local Faktory endpoint; the original snippet does not define this
    const FAKTORY_URL: &str = "tcp://localhost:7419";

    #[get("/")]
    async fn root() -> impl Responder {
        HttpResponse::Ok().body("root")
    }

    #[get("/health")]
    async fn health() -> impl Responder {
        HttpResponse::Ok().body("fine")
    }

    #[get("/gssg")]
    pub async fn gssg_handler() -> impl Responder {
        let mut p = Producer::connect(Some(FAKTORY_URL)).unwrap();

        p.enqueue(Job::new("gssg", vec!["rust"])).unwrap();
        HttpResponse::Ok().json("Ok")
    }

    #[actix_rt::main]
    async fn main() -> std::io::Result<()> {
        println!("Starting Web server");

        HttpServer::new(|| {
            App::new()
                .service(root)
                .service(health)
                .service(gssg_handler)
        })
        .bind(("0.0.0.0", 3001))?
        .run()
        .await
    }
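
Triggering a rebuild is then just a GET against the webhook. A quick check, assuming it runs on port 3001 as configured above:

curl http://localhost:3001/gssg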

And the gssg binary. Here I register the same "gssg" job kind as above:

extern crate execute;
use std::process::Command;
use execute::Execute;

use faktory::ConsumerBuilder;
use std::io;

// assumed local Faktory endpoint; the original snippet does not define this
const FAKTORY_URL: &str = "tcp://localhost:7419";


fn main() {
    let mut c = ConsumerBuilder::default();
    c.register("gssg", |job| -> io::Result<()> {
        println!("{:?}", job);

        const GSSG_PATH: &str = "/home/myuser/static-ghost/deploy/update-blog.sh";
        let mut command = Command::new(GSSG_PATH);
        let output = command.execute_output().unwrap();
        if let Some(exit_code) = output.status.code() {
            if exit_code == 0 {
                println!("Ok.");
                Ok(())
            } else {
                // note: returning Ok still acks the job even though the script failed
                eprintln!("Failed.");
                Ok(())
            }
        } else {
            eprintln!("Interrupted!");
            Ok(())
        }
    });

    let mut consumer = c.connect(Some(FAKTORY_URL)).unwrap();

    if let Err(e) = consumer.run(&["default"]) {
        println!("worker failed: {}", e);
    }

}

Updating the blog via gssg:

    #!/bin/bash

    rm -rf /home/myuser/github-user.github.io

    cd /home/myuser
    git clone [email protected]:github-user/github-user.github.io.git

    cd /home/myuser/github-user.github.io
    git pull
    rm -rf /home/myuser/github-user.github.io/docs

    /home/myuser/.npm-global/bin/gssg --dest /home/myuser/github-user.github.io/docs --domain http://local-ip:2368 --url https://mydomain.com

    echo "mydomain.com" > /home/myuser/github-user.github.io/docs/CNAME
    echo "blog.mydomain.com" >> /home/myuser/github-user.github.io/docs/CNAME
    echo "www.mydomain.com" >> /home/myuser/github-user.github.io/docs/CNAME

    git config user.name "My Name"
    git config user.email "[email protected]"

    date > updated_at

    git add .
    git commit -m "updated on `date +'%Y-%m-%d %H:%M:%S'`"
    git commit --amend --author="My Name <[email protected]>" --no-edit

    git push -f
]]>
<![CDATA[harbor on k3s]]>https://jaigouk.com/deploying-harbor/624ecb2f9001de766d023bcdSat, 03 Apr 2021 01:30:19 GMT

Update (2022): I don't use Harbor anymore with my local k3s cluster. Instead, I am just using a registry UI with a private registry.

It took some time to find a simple and easy way to deploy a registry with a UI to my rpi k3s cluster. The official repo does not provide arm64 images, and it requires a lot of memory. I found a way to deploy Harbor on my rpi 4 cluster:

git clone https://github.com/querycap/harbor.git
cd harbor
make dep
cd components/harbor

Edit the cue files.

The app.cue file:

package harbor

import (
	harbor "github.com/goharbor/harbor-helm:chart"
)

{
	apiVersion: "octohelm.tech/v1alpha"
	kind:       "Release"
	metadata: namespace: "harbor-system"
	metadata: labels: context: "default"
	#name:      "harbor"
	#namespace: "harbor-system"
	#context:   *"hw-sg" | string
}

The values.cue file (change the tag based on querycap's harbor/harbor-portal image):

package harbor

#values: {
	host:          *"harbor.example.io" | string
	adminPassword: *"xxxxxx" | string

	image: repo: "ghcr.io/querycap/harbor"
	image: tag:  "ec0ba11"
}

Install cue and deploy:

go install cuelang.org/go/cmd/cue@latest

cuem k show components/harbor > result.yaml

# edit result.yaml

kubectl apply -f result.yaml
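
To check that everything came up, something like this should do (assuming the harbor-system namespace from app.cue):

kubectl -n harbor-system get pods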


dex OIDC

To make Harbor more secure, I installed Dex.

here are the links for installing it

]]>
<![CDATA[work env in the covid-19 era]]>
https://jaigouk.com/work-env-in-covoid19-erra/624ecb2f9001de766d023bc9Tue, 23 Mar 2021 21:57:00 GMT

These days, I stay at home most of the time, and still I wish for a change of atmosphere, so I move from room to room. When I do that, I use my Android tablet.

An old MacBook Air (mid-2011) running Manjaro is a thin client, and I use it with an rpi. The rpi runs code-server, ssh, etc.; the MacBook provides the keyboard, monitor, and web browser. An ssh session with port forwarding is the coding environment, and with a WireGuard setup, code-server and nvim are accessible from anywhere. Most iPad users use code-server this way, but I am using a Samsung Galaxy Tab S6 Lite for that purpose. The tablet is also for taking notes during meetings.

A GL.iNet Slate (OpenWrt) + 8GB rpi + USB SSD is quite a nice combination. I just put it in my bag with a power bank, and then I am ready to use them remotely.

]]>
<![CDATA[istio]]>https://jaigouk.com/istio/624ecb2f9001de766d023bcbTue, 23 Mar 2021 21:08:08 GMT

Notes while reading "Istio in Action".

A service mesh is a distributed application infrastructure that is responsible for handling network traffic on behalf of the application in a transparent, out of process manner.
The service proxies form the "data plane" through which all traffic is handled and observed. The data plane is responsible for establishing, securing, and controlling the traffic through the mesh. The management components that instruct the data plane how to behave is known as the "control plane". The control plane is the brains of the mesh and exposes an API for operators to manipulate the network behaviors. Together, the data plane and the control plane provide important capabilities necessary in any cloud-native architecture


For helm v3.2.4 and k3s arm64 env,

helm repo add querycapistio https://querycap.github.io/istio
kubectl create namespace istio-operater
helm upgrade --install istio-operater querycapistio/istio-operator

output

Release "istio-operater" has been upgraded. Happy Helming!
NAME: istio-operater
LAST DEPLOYED: Fri Mar 26 17:43:16 2021
NAMESPACE: default
STATUS: deployed
REVISION: 4
TEST SUITE: None

For k3s, make sure Traefik is not running:

k3sup install --ip  xx.xx.xx.xx \
        --user my_user \
        --ssh-key ~/.ssh/my_key \
        --ssh-port 22  \
        --k3s-extra-args "--disable traefik"

# ssh to the server 
sudo rm /var/lib/rancher/k3s/server/manifests/traefik.*

k3sup join --server-ip xx.xx.xx.xx \
        --ip yy.yy.yy.yy \
        --user my_user \
        --ssh-key ~/.ssh/my_key \
        --ssh-port 22

Use the istio-system directory to deploy via kustomize:

kubectl apply -k <kustomization_directory>

After this, manually change the nodeAffinity in the deployments: remove the other arch values and set the value to arm64 for the istio-ingressgateway and istio-egressgateway deployments.

      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 2
              preference:
                matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
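
After the edit, a quick way to verify that the gateway pods actually landed on arm64 nodes (assuming they run in the istio-system namespace; check the NODE column):

kubectl -n istio-system get pods -o wide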


]]>
<![CDATA[Switching between git profiles]]>
https://jaigouk.com/switching-between-git-profiles/624ecb2f9001de766d023bc8Sat, 02 Mar 2019 09:52:00 GMT

Thinking of yourself as a separate entity can reduce anxiety, while also kicking up some major benefits for your confidence and determination. - David Robson, BBC.com

This article (The 'Batman Effect': How having an alter ego empowers you) was featured on HN best. It can be more productive to have a separate machine and git profile so you stay focused on side projects. But maintaining multiple profiles can be painful: it's easy to forget to switch, then commit your code and push under the wrong identity. So I have my own setup to prevent a situation like that.

You can automate switching users based on the current directory.

For zsh,

# += keeps any precmd hooks that are already registered
precmd_functions+=(switch_git_user)
switch_git_user() {
  if [[ $PWD == "$HOME/user/git_repo" ]]; then
    # "switch" is my own helper script that rewrites ~/.gitconfig
    $HOME/bin/switch hack

    cat ~/.gitconfig
  fi
}

With the switch script and the precmd hook above, the git profile switches automatically.
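
If you are on git 2.13+, conditional includes are an alternative that needs no shell hook. A minimal sketch, assuming ~/.gitconfig-hack is a profile file you create yourself:

# in ~/.gitconfig
[includeIf "gitdir:~/user/git_repo/"]
    path = ~/.gitconfig-hack

# in ~/.gitconfig-hack
[user]
    name = hack
    email = hack@example.com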

]]>
<![CDATA[Serverless Edge Computing]]>https://jaigouk.com/serverless-edge-computing/624ecb2f9001de766d023bc7Tue, 27 Nov 2018 01:11:03 GMT

What happened after I attended the KanDDDinsky conference in October? I found that I got too lazy. I bought a Kindle and read "DDD Distilled" and other books. There were lots of crazy ideas going on in DAG-based distributed ledger land, and I got interested in edge computing. Yeah, I remember reading the article from HN about WebAssembly, but I hadn't gone to the end of the post. Today, I read it again.

source: https://hacks.mozilla.org/2018/10/webassemblys-post-mvp-future/

Fastly launched https://www.fastlylabs.com/ recently, and I was able to find code snippets that looked familiar from Tyler McMullen's talk about Fastly Labs. Well, I guess I don't need AWS Lambda for these any more.

source: "Software Fault Isolation and Edge Computing" Fastly CTO Tyler McMullen

source: https://github.com/uTensor/uTensor

Also, there are cool open source projects out there like noise from Perlin.net.

source: https://github.com/perlin-network/noise

Well, this doesn't end here. Combined with a tool like Intel's Movidius Neural Compute Stick, which costs $70, and Tinker Boards, you can have something powerful in your hand. As Perlin's dev team envisioned, you can have quantum computing in your palm. Since those companies are doing it as of now, I think "a supercomputer in the palm of your hand" will happen very soon.

]]>
<![CDATA[Map to the future]]>
https://jaigouk.com/map-to-future/624ecb2f9001de766d023bc6Mon, 26 Nov 2018 13:28:26 GMT

In the '90s, MP3 came up, and then lots of MP3 players showed up in the market, but in the end the iPod killed them all. It evolved into the iPhone, and that changed the world. It's about connecting the dots into a meaningful result for customers. And yes, customers won't describe the future in detail. You need to live in the future by living on the edge without losing focus. And you need to see those dots and the customer's pain clearly.

https://www.fastcompany.com/90247240/exclusive-lisa-strausfeld-is-developing-an-entirely-new-kind-of-data-viz

“We’ve seen this diagram for neural networks, social networks, organizational structures. I’ve found these diagrams almost unintelligible. They just convey connectivity and complexity.”
[Image: courtesy Lisa Strausfeld]

The prototype aims to demystify networks by allowing you to explore them virtually. To use it, you put on a VR headset and enter a three-dimensional timeline that is minimal and neutral on purpose–a counterbalance to more cinematic or illustrative VR experiences. Historically important women’s lives, drawn from a women’s history project at the New School, are visualized along the X axis. Each woman’s life stretches through the decades, represented by a long series of rectangles that symbolize years. Faint lines drawn between the women signify connections–friendships, correspondences, influences through the years.

Blue Ocean Strategy is another way to see the gap.

Wardley Maps:

https://medium.com/wardleymaps/on-being-lost-2ef5f05eb1ec

]]>
<![CDATA[Introducing OpenFaaS to the company]]>https://jaigouk.com/evangelizing-openfaas-at-work/624ecb2f9001de766d023bbcFri, 07 Sep 2018 16:55:26 GMT

My motivations for using openfaas were:

  • learning tool for new languages
  • getting used to kubernetes
  • learn serverless architecture with no cost for aws

I gave a tech talk to other developers at work with a demo, and some people asked me to form a workshop. So I cloned the openfaas workshop repo, and I got permission from the head of engineering to go through it one hour per week. In the beginning there were 17 devs; as time passed, people were not able to attend because they were simply busy with their work.

Some people followed the lab notes if they were not able to attend the workshop. We have the following lab note structure:

## Lab 5 - Create a Gitbot

[official openfaas lab5 note](https://github.com/openfaas/workshop/blob/master/lab5.md)


** feel free to update our notes.**

TOC

1. following the official lab note
2. what we learned
3. ref

After the 10th workshop session was done, I wanted to push this to the next phase. But there were concerns about openfaas from other devs: it's still young, and people didn't want to use it in our production environment. It is OK for internal tools, though. Not production, but the impact can still be huge depending on the problem I tackle.

Luckily, the head of the frontend team was enthusiastic about using openfaas, because I told him that I was able to write GraphQL functions connected to Neo4j. It needs to be polished, but I was able to show a working query. He also had a real problem: lots of PRs hanging open for a long time. The repository is legacy code, and people hate to dive into legacy code. Well, as a "smart developer", you can find ways to avoid the situation, and in the end the legacy repository ended up with PRs waiting for people to review and QA them. It's not about serverless architecture; it's about code ownership and having a solid CI/CD work process. I suggested that he form a plan for that. GitOps can be a candidate if it's well crafted with QA steps.

There might be something I can help with. And then I remembered that the OpenFaaS community is using Derek, a serverless 🤖, to manage PRs and issues.

If we add more "pipe lines" that makes sense to the frontend team, then we can use it to improve our daily work process. Yes, I agree that it's more about people problem. but it will accelerate the plan. Now, I found a chance to use openfaas at work at last. Good thing about my situation is that our backend will be broken into services and we're going to use kafka as our message queue. And it's possible to launch openfaas function with kafka-connector. (you can trigger your functions via sns, rabbit mq and cloud events too)

People will see it working and feel good about Derek and the automation of chores. If people start to add more features or functions, they might want to replicate the experience in the production environment. We can do it with a "stable" service like AWS Lambda in the beginning, but that will cost money. Then openfaas is a good option, isn't it?

]]>
<![CDATA[Message queues]]>https://jaigouk.com/message-queues/624ecb2f9001de766d023bbbSun, 02 Sep 2018 00:25:29 GMT

Message queues are the backbone of your services.

If you have gone through the MVP stage and outgrown the monolith, you are probably thinking that something has to be done. There are lots of problems on the table. Especially once your organization has grown past 15 devs, it's likely that people want to use Python for ML, Golang for infrastructure, Elixir for the GraphQL API endpoint, etc. There are various ways to make that happen: gRPC is one possibility, or you can use message queues. After the boom of mobile apps and IoT, the server side has to be scalable and maintainable, and it's true that you can't just stick with one programming language and treat it as a silver bullet. Docker was the answer to that chaos, and Kubernetes is the winning orchestration tool. Still, you need to figure out how to break things down and put them on Kubernetes. Openfaas is using NATS Streaming, and there are multiple options (Kafka, AWS SNS, CloudEvents, RabbitMQ) to trigger your serverless functions.

While investigating the message queue options out there, I found that Kafka is used at NYT and usually scores higher on HN.

I found that openfaas is a good example of using NATS Streaming. The FAQ doc says it supports a memory or data store for messages.


And openfaas is using the in-memory store for messages. The following snippet is from openfaas' docker-compose.yml file; you can find the kubernetes config here:

nats:
    image: nats-streaming:0.6.0
    # Uncomment the following port mappings if you wish to expose the
    # NATS client and/or management ports
    # ports:
    #     - 4222:4222
    #     - 8222:8222
    command: "--store memory --cluster_id faas-cluster"
    networks:
        - functions
    deploy:
        resources:
            limits:
                memory: 125M
            reservations:
                memory: 50M
        placement:
            constraints:
                - 'node.platform.os == linux'

I wanted to "replay" messages if something goes wrong. and then I found this comment from this PR in nats-queue-worker

If you use a NATS Streaming server with memory store, it is true that if the server is restarted, since no state is being restored, the previously "connected" clients will stop receiving messages. Publishers would fail too since the server would reject published messages for unknown client IDs.
The streaming server and streaming clients communicate through some inboxes. When the Streaming server is restarted, since it lost that knowledge, it can't communicate with existing clients. Moreover, even internal subjects used to communicate between the server and its clients contain a unique id that won't be the same after the restart).

Note: If the NATS Streaming server connects to a non-embedded NATS Server, then if the NATS Server itself is restarted, that is fine, the client library's use of the underlying NATS connection will reconnect and everything would work fine (some timeout may occur for the operations that were inflight when the NATS server was restarted). This is because the Streaming server would still be running and its state maintained, so the communication can continue.

Now, I am curious: if I change openfaas' nats-streaming config to use a PV like this, will openfaas fetch the previous messages even if the streaming server goes down or nats-queue-worker is restarted? I will play with it and write a follow-up article.
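
A minimal sketch of what I mean, mirroring the snippet above (untested; the file-store flags are from the nats-streaming docs, and nats-data would need a matching top-level volumes entry):

nats:
    image: nats-streaming:0.6.0
    # file store instead of the in-memory store, backed by a volume/PV
    command: "--store file --dir /data --cluster_id faas-cluster"
    volumes:
        - nats-data:/data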

]]>
<![CDATA[OpenFaaS on IBM Cloud]]>https://jaigouk.com/openfaas-on-free-kubernetes/624ecb2f9001de766d023bbaFri, 24 Aug 2018 11:54:18 GMT

Openfaas on Kubernetes - cheapest options


Update (3 Sep 2018): updated the part about installing openfaas on a local kube.

Update (26 Aug 2018): I talked with Alex. He told me, "A single node on DigitalOcean with 4GB RAM should be quite cost effective, same with Scaleway using kubeadm. Minikube on your own system is also a good option or with Docker for Mac. (EKS is not cheap) - GKE might be one of the cheaper managed options"

There are many options for running openfaas: mainly Docker Swarm or Kubernetes. Kubernetes moved into the Cloud Native Computing Foundation, which also has exam materials for certified admins and devs. There are still lots of companies out there using Docker Swarm as their container orchestration tool, but the trend clearly shows that Kubernetes won the battle; the CNCF is the result. It's explicit and configurable. The problem is that installing and maintaining Kubernetes on AWS is quite complicated, and it takes some time to get used to the whole setup. With DigitalOcean it's fairly easy compared to AWS, and after September 2018 they will release Kubernetes as a service. At this point the pricing page is not up yet; they might charge based on droplets.

Then, what would be the option for getting used to openfaas on Kubernetes without spending any money? There are two options: a local setup and IBM Cloud. Each has pros and cons. A local setup will eat up your resources, and you can't easily show what you've been playing with to other people; as long as you have enough resources, it's OK. With IBM Cloud, you need to switch to the pay-as-you-go plan, which means giving them your credit card number, but the "lite" plan doesn't charge you: 4GB of RAM, and it expires in a month. You can't use Ingress, just NodePort. A public IP exists.

The ugly part of the serverless thing is the hidden cost, as described here. Openfaas can limit the maximum cost while keeping the scalability. You have the control. No vendor lock-in.

Option 1. Openfaas on local Kubernetes

I assume you're using OS X.

brew install kubernetes-cli
brew install kubectx

Install Kubernetes and the dashboard by following this GitHub wiki page:

kubectl config set-context docker-for-desktop
kubectl config use-context docker-for-desktop
kubectx
kubectl get all
kubectl cluster-info
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

kubectl describe services kubernetes-dashboard --namespace=kube-system
kubectl -n kube-system edit service kubernetes-dashboard
# edit .spec.type to NodePort
sudo kubectl proxy

Visit the local dashboard.

Get your token to log in to the dashboard:

kubectl create serviceaccount jaigouk
kubectl get serviceaccounts jaigouk -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  # ...
secrets:
- name: jaigouk-token-1yvwg
# use the secret name from the output above (the suffix is random)
kubectl get secret jaigouk-token-1yvwg -o yaml

echo xxxxx== | base64 --decode

Install openfaas.

The faas-netes controller is the most tested, stable and supported version of the OpenFaaS integration with Kubernetes. In contrast the OpenFaaS Operator is based upon the codebase and features from faas-netes, but offers a tighter integration with Kubernetes through CustomResourceDefinitions. This means you can type in kubectl get functions for instance.

kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml

helm repo add openfaas https://openfaas.github.io/faas-netes/

helm repo update \
 && helm upgrade openfaas --install openfaas/openfaas \
    --namespace openfaas  \
    --set functionNamespace=openfaas-fn \
    --set operator.create=true

# generate a random password
export GW_PASS=$(head -c 16 /dev/random |shasum | cut -d ' ' -f1)

kubectl -n openfaas create secret generic basic-auth \
--from-literal=basic-auth-user=admin \
--from-literal=basic-auth-password=$GW_PASS

helm upgrade --reuse-values openfaas openfaas/openfaas \
    --set basic_auth=true

# verify it's running
kubectl --namespace=openfaas get deployments -l "release=openfaas, app=openfaas"

kubectl proxy

echo $GW_PASS | faas-cli login --gateway="127.0.0.1:31112" -u admin --password-stdin

Visit http://localhost:31112/ui/.
The username is admin, and the password is the $GW_PASS value.

If you have trouble, visit https://github.com/openfaas/faas-netes/blob/master/chart/openfaas/README.md for details.
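
To sanity-check the install, deploying one of the stock sample functions should work (figlet is a standard OpenFaaS sample image; adjust the gateway URL if yours differs):

faas-cli deploy --image functions/figlet --name figlet --gateway http://127.0.0.1:31112
echo OpenFaaS | faas-cli invoke figlet --gateway http://127.0.0.1:31112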

Option 2. Using Kubernetes on IBM Cloud

The IBM Cloud gives us a free one-node Kubernetes instance that expires in a month. As long as you have automated everything, it isn't a big problem to play with it. For example, you can try "GitLab + OpenFaaS for Serverless CI/CD on Kubernetes". Check their free versus standard plan features in this doc.

# Download and install a few CLI tools and the IBM Kubernetes Service plug-in.

curl -sL https://ibm.biz/idt-installer | bash
ibmcloud login -a https://api.eu-de.bluemix.net
ibmcloud cs region-set eu-central
ibmcloud cs cluster-config gunship

export KUBECONFIG=/Users/$USER/.bluemix/plugins/container-service/clusters/gunship/kube-config-mil01-gunship.yml

kubectl get nodes

Let's deploy openfaas.

git clone https://github.com/openfaas/faas-netes
# create name space

kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml

cd faas-netes && kubectl apply -f ./yaml
# memory usage is 327MB

Check your cluster's public IP. Visit the Kubernetes dashboard and find the openfaas namespace. Under Discovery and load balancing > Services > gateway, you will be able to see the port that is exposed.

Running a 2x4 (2 CPUs, 4 GB RAM, 100 GB HDD) cluster costs $77.76 for 744 hours (31 days). It's worth waiting for DigitalOcean's Kubernetes offering at this point.

brew install kubernetes-helm
helm init --canary-image --upgrade

Useful tools

krex

Kubernetes Resource Explorer

https://github.com/kris-nova/krex

krex in action - tail logs

krex in action - describe & tail logs

krex in action - sh into pod

# installing
brew install ncurses
export PKG_CONFIG_PATH=/usr/local/opt/ncurses/lib/pkgconfig

ln -s /usr/local/opt/ncurses/lib/pkgconfig/formw.pc /usr/local/opt/ncurses/lib/pkgconfig/form.pc
ln -s /usr/local/opt/ncurses/lib/pkgconfig/menuw.pc /usr/local/opt/ncurses/lib/pkgconfig/menu.pc
ln -s /usr/local/opt/ncurses/lib/pkgconfig/panelw.pc /usr/local/opt/ncurses/lib/pkgconfig/panel.pc

go get -u github.com/kris-nova/krex
cd $GOPATH/src/github.com/kris-nova/krex
make all

kubectx

Fast way to switch between clusters and namespaces in kubectl

https://github.com/ahmetb/kubectx

kubectx in action

kubens in action

# installing
brew install kubectx
]]>
<![CDATA[Running OpenFaaS on Tinkerboard cluster]]>https://jaigouk.com/openfaas-on-tinkerboard-cluster/624ecb2f9001de766d023bb9Fri, 16 Feb 2018 17:15:15 GMT

I wanted to play with serverless architecture, but I didn't want to spend money before getting used to the concept. A year ago I tried exactly that and ended up using all my credits on AWS. What would be a cheaper way to learn it? I have been exploring the options. The rpi is good, but it only has 1GB of RAM. Then I encountered the Asus Tinker Board, so I gave it a shot. I bought 3 Tinker Boards (ARM Cortex-A17, 4x 1.8GHz, 2GB RAM, WLAN, Bluetooth), 32GB SD cards, and so on. I spent 303 euros.

I followed Karol Stepniewski's blog post and forked his repo. I used the latest Armbian image, and the network part was fine, as opposed to his blog post.


Now, it's time to dig into OpenFaaS.

]]>
<![CDATA[raspberry pi zero w for IOT dev env]]>https://jaigouk.com/raspberry-pi-zero-w-for-iot-dev-env/624ecb2f9001de766d023bb8Sat, 01 Apr 2017 00:22:56 GMT

Motivation


Recently, I watched the Google I/O tech talks and found the progressive web + physical web combination quite interesting. I had an idea: I want to play with Bluetooth Low Energy devices. For prototyping the idea, I bought a Raspberry Pi Zero W (yup, this is my second one). Before watching the tech talk, the rpi0 meant nothing to me, because I always wanted powerful small computers that I could build a cluster from. Now it makes sense to play with the rpi0w: it's a good fit for prototyping physical web related ideas.

prep

  • install VirtualBox and Ubuntu; prepare a disk larger than 16GB if it's a desktop version of Ubuntu
  • docker should be up and running locally

tl;dr

In your mix.exs, add rpi0_ble:

 def deps("rpi0") do
    [{:nerves_runtime, "~> 0.1.0"},
     {:"nerves_system_rpi0_ble", github: "jaigouk/nerves_system_rpi0_ble", tag: "v0.0.4", runtime: false},
     {:nerves_interim_wifi, "~> 0.1.0"}]
  end

Burn your SD card with the following commands.

MIX_TARGET=rpi0 mix deps.get
MIX_TARGET=rpi0 mix firmware
# Burn to an SD card with:
MIX_TARGET=rpi0 mix firmware.burn

The image has iptables, ssh, Erlang, Node.js, BlueZ, git, vi, etc., because it's meant for prototyping a BLE app and testing it.

configure the image

I need to customize the default rpi0 system because I need to add BlueZ. I found this tutorial from Adafruit. Since the nerves-project is using Buildroot, there must be a way to add the library to my rpi0, and then I found this blog post.


I referenced this slide and the getting-started doc from the nerves-project.

install and download libraries we need

sudo apt-get install git g++ libssl-dev libncurses5-dev bc m4 make unzip cmake
git clone [email protected]:nerves-project/nerves-system-br.git
git clone [email protected]:jaigouk/nerves_system_rpi0_ble.git
mkdir -p ~/.nerves/cache/buildroot
nerves_system_br/create-build.sh nerves_system_rpi0_ble/nerves_defconfig ~/.nerves/cache/buildroot
cd ~/.nerves/cache/buildroot

Now, let's configure our linux image.

  • Select base packages by running make menuconfig
  • Modify the Linux kernel and kernel modules with make linux-menuconfig
  • Enable more command line utilities using make busybox-menuconfig

make menuconfig will launch a GUI that helps you add extra packages. I want to add iptables, git, openssh, and openssl:

│ Prompt: git
│   Location:
│     -> Target packages
│       -> Development tools

iptables

│ Prompt: iptables
│   Location:
│     -> Target packages
│       -> Networking applications

openssh

│ Prompt: openssh
│   Location:
│     -> Target packages
│       -> Networking applications

openssl

 Prompt: openssl
│   Location:
│     -> Target packages
│       -> Libraries
│         -> Crypto

you can even find nginx, wireshark,etc in networking applications.

Run make savedefconfig after make menuconfig to update the nerves_defconfig in your system. It will spit out results like this:

umask 0022 && make -C /home/jaigouk/nerves_system_br/buildroot-2016.11.1 O=/home/jaigouk/.nerves/cache/buildroot/. savedefconfig
  GEN     /home/jaigouk/.nerves/cache/buildroot/Makefile

linux-menuconfig

│ Prompt: Bluetooth Low Energy (LE) features
│   Location:
│     -> Networking support (NET [=y])
│       -> Bluetooth subsystem support (BT [=y])

Run make linux-savedefconfig and cp build/linux-x.y.z/defconfig <your system>

umask 0022 && make -C /home/jaigouk/nerves_system_br/buildroot-2016.11.1 O=/home/jaigouk/.nerves/cache/buildroot/. linux-savedefconfig

We can add vim, grep, ifconfig, ping, ps, top, free, etc. with the following command. Note that busybox-menuconfig only works when BusyBox is enabled.

make busybox-menuconfig

This is what the getting-started doc says:

If your system doesn’t contain a custom Linux configuration yet, you’ll need to update the Buildroot configuration to point to the new Linux defconfig in your system directory. The path is usually something like $( )/linux-x.y_defconfig.
For Busybox, the convention is to copy build/busybox-x.y.z/.config to a file in the System repository. Like the Linux configuration, the Buildroot configuration will need to be updated to point to the custom config.

/home/jaigouk/.nerves/cache/buildroot/build/busybox-1.25.1/.config

So I copied the defconfig to nerves_system_rpi0_ble/linux-4.4.defconfig and the .config to nerves_system_rpi0_ble/nerves_defconfig.

Now my custom rpi0 config is ready to be used.

]]>
<![CDATA[In Berlin]]>

https://jaigouk.com/in-berlin/624ecb2f9001de766d023bb7Mon, 06 Mar 2017 00:10:26 GMT

I moved to Berlin a month ago. I was curious what it's like to be in the startup ecosystem in Berlin. As a Korean, I heard good things about it: lots of game companies and devs moved from South Korea to Germany for lots of reasons.

I talked with my colleagues in Berlin. The U.S. is not an ideal place for Asian tech people to stay anymore, especially these days, with Trump making it worse day by day. On HN, I saw an article saying that tech companies in Silicon Valley have started to think about relocating to Vancouver. The news about immigrants in the U.S. is familiar to me, because I went through those situations myself when I visited the U.S. before.

I hope wealth is more distributed by new political actions. Otherwise, people will not be able to contain their anger.

Anyways, it has been 1 month in Berlin now. I hope I can meet lots of awesome hackers here.

]]>