TIP: if Git is set up correctly, it knows we're working with Python, so the output of commands like git grep and git diff is much more useful.
setup: echo '*.py diff=python' >> .gitattributes
Example: find the usage/definition of formatted_tax:
$ git grep -p formatted_tax
app/models.py=2083=class Order(CreatedMixin):
app/models.py:2454: 'tax': self.formatted_tax,
The second line says the symbol was found on line 2454. The first line shows the context: that line is inside the class Order, which starts on line 2083. Very useful!
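The same attribute helps git diff too: with diff=python set, hunk headers show the enclosing class or def instead of whatever random nearby line Git would otherwise pick. A rough illustration (the line numbers and the change shown here are made up, not from a real diff):
$ git diff app/models.py
@@ -2451,7 +2451,7 @@ class Order(CreatedMixin):
-        'tax': self.formatted_tax,
+        'tax': self.formatted_tax or '',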
related: 'git grep' and Language-Aware Diffs
Friday, August 28, 2015
slides: Practical Python Testing
The talk at The Black Tux last night went really well! Here are the slides:
- Practical Python Testing (google docs)
Sunday, August 23, 2015
Django: speed up Sqlite 1000x!
I'm working on a project analyzing large code bases. For just messing around, I'm using Sqlite. A strange thing happened when updating ~10,000 rows: the updates were really slow! Sqlite was updating about 10 records a second.
It turns out Sqlite rightly cares about your data, so it wraps a transaction around each UPDATE statement and verifies that every bit of your data hits disk before returning. For me, if disaster strikes and I lose my tiny database, I can recreate it in a few seconds, so I decided to live life dangerously and tell Sqlite to go ahead and update the data, but not wait around for it to hit disk.
Here's the magic. Put this before your bulk INSERT/UPDATE statements. Note that it affects the entire session, so you don't want to do this before your valuable data-manipulation commands.
from django.db import connection

if connection.vendor == 'sqlite':
    # Trade durability for speed: don't wait for each write to reach disk.
    connection.cursor().execute('PRAGMA synchronous=OFF')
Three commands sped up my program a zillion percent. The entire 10K rows updated in a fraction of a second, vs minutes. Yay!
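If you want to go a little further, here's a minimal sketch (not from my actual code; the function name and the objects passed in are hypothetical) that turns synchronous off, does the bulk work inside a single transaction instead of one per row, and then restores the usual setting afterwards:

from django.db import connection, transaction

def fast_bulk_update(objects):
    if connection.vendor == 'sqlite':
        connection.cursor().execute('PRAGMA synchronous=OFF')
    try:
        with transaction.atomic():  # one transaction instead of one per row
            for obj in objects:
                obj.save()
    finally:
        if connection.vendor == 'sqlite':
            # Restore durability (FULL is the usual default).
            connection.cursor().execute('PRAGMA synchronous=FULL')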
talk: Practical Python Testing
I'm speaking at this month's SoCal Python Meetup! I'll post slides and notes here soon.
http://www.meetup.com/socalpython/events/224586741/
Thursday, August 13, 2015
How and why we use DevOps checklists - Server Density Blog
In the health care and airline industries, simple checklists save thousands of lives. Here are several clear examples of how the same technique is used in DevOps:
How and why we use DevOps checklists - Server Density Blog
Friday, August 7, 2015
tip: easily run Postgres administration commands
TIP: on Linux it helps to be the Postgres user to do administration stuff with the database. Either do some configuration twiddling, or run commands as the postgres user:
sudo su -c 'dropdb mydb' postgres
In Ansible, this is:
- name: database -- zap database
  command: dropdb mydb
  sudo: yes
  sudo_user: postgres
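The same pattern works for the other Postgres admin tools too; for example (the database name here is just a placeholder):
sudo su -c 'createdb mydb' postgres
sudo su -c 'psql mydb' postgres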