Keybase proof

I hereby claim:

  • I am kirel on github.
  • I am kirel (https://keybase.io/kirel) on keybase.
  • I have a public key ASAyUBlFMVnqoBrq1bRWkSVcUiyeW_uQZaw4tA_BmvAg6go

To claim this, I am signing this object:

FROM r-base:3.5.2
# pin r-base-core so later apt-get runs don't replace R 3.5.2
RUN apt-mark hold r-base-core
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        libcurl4-gnutls-dev \
        libxml2-dev \
        libmariadbclient-dev \
        libssl-dev && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
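The four system libraries are the usual build-time dependencies of common R packages; a minimal sketch of the kind of step this image prepares for (the package list is an assumption, not part of the gist):

# hypothetical follow-up: each package links against one of the libs installed above
RUN R -e 'install.packages(c("curl", "xml2", "RMySQL", "openssl"), repos = "https://cloud.r-project.org")'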
# collector wraps a data.frame and records which columns are accessed
collector = function(df) {
  if (!is.data.frame(df)) stop("df must be a data.frame")
  env = new.env()
  env$accessed = character(0)
  structure(list(df = df, env = env), class = "collector")
}
`[[.collector` = function(x, attr, ...) {
  # log into an environment so the record survives R's copy-on-modify
  x$env$accessed = c(x$env$accessed, attr)
  x$df[[attr]]
}
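A quick usage sketch (the data frame is made up) showing the access log accumulating:

col = collector(data.frame(a = 1:3, b = 4:6))
col[["a"]]
col[["b"]]
col$env$accessed  # "a" "b"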
kirel / proxy.js
Proxy for basic auth.
// Usage: USER=user PASSWORD=pw TARGET=http://localhost:3838 PORT=8080 node proxy.js
var http = require('http'),
    auth = require('basic-auth'),
    httpProxy = require('http-proxy');
var proxy = httpProxy.createProxyServer({});
var server = http.createServer(function(req, res) {
  var credentials = auth(req);
  // reject the request unless the credentials match USER/PASSWORD from the environment
  if (!credentials || credentials.name !== process.env.USER || credentials.pass !== process.env.PASSWORD) {
    res.writeHead(401, { 'WWW-Authenticate': 'Basic realm="proxy"' });
    return res.end('Access denied');
  }
  proxy.web(req, res, { target: process.env.TARGET });
}).listen(process.env.PORT);
FROM cartography/osrm-backend-docker
# toolchain PPA provides a C++11-capable g++-4.9
RUN apt-get update && apt-get install -y software-properties-common && add-apt-repository ppa:ubuntu-toolchain-r/test
RUN curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
RUN apt-get update && apt-get install -y wget nodejs g++-4.9
# replace Lua 5.1/LuaJIT with Lua 5.2 and luabind
RUN apt-get update && apt-get remove -y lua5.1 liblua5.1-0-dev libluajit-5.1-dev \
    && apt-get autoremove -y \
    && apt-get install -y wget lua5.2 liblua5.2-dev libluabind-dev
# point gcc/g++ at the 4.9 toolchain
RUN rm /usr/bin/g++ /usr/bin/gcc && ln -s /usr/bin/gcc-4.9 /usr/bin/gcc && ln -s /usr/bin/g++-4.9 /usr/bin/g++
import numpy as np
import pandas as pd
import re
from ftplib import FTP
import zipfile
from io import BytesIO  # Python 3 replacement for the Python 2 StringIO module

# DWD open-data archive: daily climate observations per station
FTP_SERVER = 'ftp-cdc.dwd.de'
PATH_HISTORICAL = 'pub/CDC/observations_germany/climate/daily/kl/historical'
PATH_RECENT = 'pub/CDC/observations_germany/climate/daily/kl/recent'
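Where the preview cuts off, the natural next step is to download a station archive and read it from memory, which is what BytesIO is for; a sketch continuing the snippet above (the zip and member names are illustrative, the real archives follow a tageswerte_KL_* pattern):

ftp = FTP(FTP_SERVER)
ftp.login()  # anonymous login
buf = BytesIO()
# hypothetical archive name for one station
ftp.retrbinary('RETR %s/tageswerte_KL_00044_hist.zip' % PATH_HISTORICAL, buf.write)
with zipfile.ZipFile(buf) as zf:
    member = [n for n in zf.namelist() if n.startswith('produkt')][0]
    df = pd.read_csv(zf.open(member), sep=';')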
kirel / Einwohner Wikipedia.csv
Zugezogene (newcomers)
Rang,Name,1970,1980,1990,2000,2010,2014,Bundesland
1.,Berlin,3.208.719,3.048.759,3.433.695,3.382.169,3.460.725,3.469.849,Berlin
2.,Hamburg,1.793.640,1.645.095,1.652.363,1.715.392,1.786.448,1.762.791,Hamburg
3.,München,1.311.978,1.298.941,1.229.026,1.210.223,1.353.186,1.429.584,Bayern
4.,Köln,849.451,976.694,953.551,962.884,1.007.119,1.046.680,Nordrhein-Westfalen
5.,Frankfurt am Main,666.179,629.375,644.865,646.550,679.664,717.624,Hessen
6.,Stuttgart,634.202,580.648,579.988,583.874,606.588,612.441,Baden-Württemberg
7.,Düsseldorf,660.963,590.479,575.794,569.364,588.735,604.527,Nordrhein-Westfalen
8.,Dortmund,640.642,608.297,599.055,588.994,580.444,580.511,Nordrhein-Westfalen
9.,Essen,696.419,647.643,626.973,595.243,574.635,573.784,Nordrhein-Westfalen
library(rvest)
# scrape a German city's population from its Wikipedia infobox
fetchPopWiki = function(query) {
  tryCatch({
    city <- read_html(paste0("https://www.wikipedia.de/go?q=", gsub(" ", "+", query), "&l=de"))
    table = city %>% html_nodes("table.infobox") %>%
      html_table(fill = TRUE)
    tds = city %>% html_nodes('th,td')
    # the cell after the "Einwohner" header holds the figure (sometimes labelled "Bevölkerung")
    population = sub('([\\.0-9]+).*', '\\1', html_text(tds[grep('Einwohner|Bevölkerung', tds)[1] + 1]))
    num = as.numeric(gsub('\\.', '', population))  # drop thousands separators
    num
  }, error = function(e) NA)
}
import pandas as pd
import mysql.connector  # driver behind the mysql+mysqlconnector URL
from sqlalchemy import create_engine

engine = create_engine('mysql+mysqlconnector://root:@localhost/test')
# read the whole table inside a single transaction
with engine.connect() as conn, conn.begin():
    data = pd.read_sql_table('table', conn)
data
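The reverse direction works the same way; a short sketch (the target table name is an assumption) writing the frame back through the engine:

# hypothetical round trip: write the frame back under a new name
with engine.connect() as conn, conn.begin():
    data.to_sql('table_copy', conn, if_exists='replace', index=False)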
require 'json'
require 'csv'
require 'time'

# merge every Google Takeout search-history file into one event list
merged = Dir.glob('Suchanfragen/Suchanfragen/*').collect { |f| JSON.parse(File.read(f)) }.inject do |m, h|
  m['event'].concat h['event']
  m
end
# build [ISO-8601 timestamp, query text] pairs; timestamps are in microseconds
queries = merged['event'].map { |h| [Time.at(h['query']['id'].first['timestamp_usec'].to_f / 1_000_000).iso8601, h['query']['query_text']] }
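csv is required but never used in the preview; presumably the pairs end up in a file, roughly like this (the output name is an assumption):

# hypothetical output step: one row per [timestamp, query] pair
CSV.open('queries.csv', 'w') do |csv|
  csv << ['time', 'query']
  queries.each { |row| csv << row }
end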