Friday, December 11, 2009

Google App Engine vs. PHP

I've been messing around a bit with IP information, proxy detection, and geolocation at work, and wanted to build some of my own services in order to get a better understanding of them. So far, the only thing up at http://ip.emmesdee.com is a basic replacement for ipchicken and whatismyip, but my experimentation has been slowed by a couple of surprising limitations in Google App Engine.

In order to check whether a visitor is a Tor exit node via tordnsel, it would be much easier if GAE could perform DNS queries. You have to think this is coming with their recent announcement of public DNS servers. (I wish they would host DNS too! Dotster charges $10/yr for DNS, and my friends down under have had problems with GoDaddy DNS.)
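
For the record, here is roughly what that check looks like when you do have socket access. This is a minimal sketch from memory of the TorDNSEL ip-port query format, so treat the zone name and exact semantics as an assumption to verify:

import socket

def is_tor_exit(client_ip, server_ip, server_port):
    # TorDNSEL ip-port query (format as I understand it): reversed client
    # IP, then the destination port, then the reversed server IP, under
    # ip-port.exitlist.torproject.org. An answer in 127.0.0.0/8 means
    # "known Tor exit"; NXDOMAIN means the address is not listed.
    rev = lambda ip: '.'.join(reversed(ip.split('.')))
    name = '%s.%s.%s.ip-port.exitlist.torproject.org' % (
        rev(client_ip), server_port, rev(server_ip))
    try:
        return socket.gethostbyname(name).startswith('127.')
    except socket.gaierror:
        return False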

In order to do basic geolocation, it would be simplest if I could just import a SQL database and query it. Unfortunately, it is still far easier to just bang out a little PHP than it is to figure out how to migrate to BigTable.
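
For comparison, the PHP version boils down to a single range query against one of the freely available IP-to-country tables. Here's a sketch of the same thing in Python with sqlite, assuming a table of (ip_start, ip_end, country) rows with IPs stored as 32-bit integers:

import socket
import sqlite3
import struct

def ip_to_int(ip):
    # Dotted quad to the 32-bit integer form used by the geoip table.
    return struct.unpack('!I', socket.inet_aton(ip))[0]

def country_for(conn, ip):
    # Assumed schema: geoip(ip_start INTEGER, ip_end INTEGER, country TEXT)
    n = ip_to_int(ip)
    row = conn.execute(
        'SELECT country FROM geoip WHERE ip_start <= ? AND ip_end >= ?',
        (n, n)).fetchone()
    return row and row[0]

conn = sqlite3.connect('geoip.db')
print(country_for(conn, '8.8.8.8'))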

All things considered, GAE still feels a bit beta, but hopefully these problems will be gone in another year or so.

Friday, October 16, 2009

Google App Engine as static web server, bazaar vs. git

I've been serving http://www.emmesdee.com as a google app ( http://msdewww.appspot.com ) since August 2008 without any problems, but I'm thinking about switching to DryDrop. My big resistance at this point is actually version control. When I did my research into distributed version control a couple of years ago, bazaar looked more interesting to me than git, partially because it felt more like svn than git did, and partially because it seemed the easiest to host. All you need for a "repository" is just ssh access to a linux box and a directory.

Wednesday, July 29, 2009

grails conditional bootstrapping

I've been fortunate enough to fit grails into a new development project, and it's definitely made things more pleasant. I'm still missing eclipse and autocompletion a bit, but overall I am more productive. Anyway, I've created a bunch of objects in my grails-app/conf/BootStrap.groovy for testing that also happen to be good initial values for production. However, they're getting created every time I restart the app, causing duplication! I've seen many references to checking GrailsUtil.environment, e.g.
def init = {
    if (GrailsUtil.environment == "development") initData();
}
This is close, but it doesn't do quite what I want. When I'm doing some of my testing, I don't need my objects created, as I want to persist my data between restarts. The right trigger for re-creating my objects isn't the environment; it's whether I've told my DataSource to delete all data. That happens when the dbCreate property is "create" or "create-drop".
def init = {
    // only bootstrap if data was deleted
    // (note the ?. on dbCreate, which may be null if the property is unset)
    def dbCreate = org.codehaus.groovy.grails.commons.ConfigurationHolder.config.dataSource?.dbCreate
    if (dbCreate?.startsWith("create")) {
        initData();
    }
}
Please forgive the non-grooviness of my code; I still like my groovy code to read like java.

Thursday, June 18, 2009

google app engine monitoring service lessons learned

As mentioned earlier, we were in need of some monitoring for our co-loc. After some experimentation, it turned out to be easiest to build the monitoring inside the co-loc almost entirely in nagios. For the external monitoring component, I deployed a web proxy called mirrorrr onto google app engine to give us a basic way to check DNS and other potential network issues from outside of the co-loc (a sketch of such a check appears after the list below). I need to follow up with a single monitor of the co-loc itself from google app engine, but our existing monitor from siteuptime can suffice for now.

Nagios appears to have a couple of annoying quirks regarding DNS though. It seems to be rather insistent on querying IP addresses for low level services, rather than doing more system-level monitoring.
  • check_http appears to resolve the hostname into an IP address before checking it unless the -H option is used. This breaks google app engine and anything else that relies on a virtual-host-style mechanism to determine which page to serve.
  • check_smtp does not seem to do an MX lookup for a host. Instead, it resolves the domain to our web server and tries to speak SMTP to it.
  • mirrorrr has a 1 hour cache by default. It should be minimized or disabled when used for monitoring.
  • TBD: The local mirrorrr install should probably get an IP range filter added to it so that it is more difficult to DOS.
  • nagios isn't too happy about passing messages around machines. My main options appear to involve choosing the least of three evils:
    • adding a private key to the nagios machine and sshing everywhere
    • installing NRPE as a daemon on every machine and querying them (it's not that lightweight, and most of the servers already badly need more memory)
    • calling send_nsca from some cron shell scripts to a relatively insecure mechanism on the nagios machine. I opted for send_nsca, but the port is only visible to the intranet, and there are a couple of machines in a DMZ that can't reach it easily.
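
For reference, the external round-trip check through mirrorrr can be a trivial script following the Nagios plugin convention of exiting 0 for OK and 2 for CRITICAL. A minimal sketch; the appspot URL and the expected content string here are placeholders, not our real setup:

#!/usr/bin/env python
# Hypothetical external check: fetch our site through the mirrorrr proxy
# on app engine, so the request originates outside of the co-loc.
import socket
import sys
import urllib2

MIRROR_URL = 'http://mirror-example.appspot.com/www.example.com/'  # placeholder

socket.setdefaulttimeout(30)
try:
    page = urllib2.urlopen(MIRROR_URL).read()
except Exception, e:
    print 'CRITICAL: %s' % e
    sys.exit(2)

if 'Expected Content' not in page:
    print 'CRITICAL: unexpected page content'
    sys.exit(2)

print 'OK: site reachable from outside the co-loc'
sys.exit(0)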

Wednesday, May 20, 2009

Grails Integration

I finally got the chance to do a bit of green field development, and a good excuse to integrate grails. I've always had trouble justifying the use of it in the past, because it is difficult to extend an existing java project with grails. However, I had the opportunity to break off a piece of logic and implement it using grails, while using the existing java app to read the results.

Requirements that allowed for easy grails:
  • New database used to store data
  • Standalone CRUD GUI
  • Database used as the primary form of communication with other components

Inputs:
  • Reporting component
  • Job requests through a mysql table
  • Standalone CRUD GUI

Outputs:
  • 2 mysql tables to replicate to the front end
  • 6 mysql tables to replicate to the reporting server

Tuesday, March 24, 2009

google app engine monitoring service

My plate is too full to start this project, but we need a simple monitoring service for our systems, similar to nagios. I'm thinking about throwing this together on the new Google App Engine, because hosting a nagios server inside the same data center it monitors is pointless, and GAE seems like a good fit.

Update: I ended up using nagios after all, and google app engine as a simple web proxy for round trip http testing.

Monday, January 5, 2009

Exporting MySQL to CSV

For some quick reporting, I needed to export some data from a table into csv format, and this proved to be particularly handy.

SELECT * INTO OUTFILE 'output.csv'
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    FROM users;

(Note that the file is written by the MySQL server itself, so a relative path like this lands in the server's data directory, and your user needs the FILE privilege.)

http://forums.mysql.com/read.php?79,11324,11324#msg-11324 contains enough info on both of the common approaches, the other of which uses sed.
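
For completeness, here is the other approach done in Python rather than sed, converting the tab-separated output of mysql -B into quoted CSV:

import csv
import sys

# Usage: mysql -B -e "SELECT * FROM users" mydb | python tab2csv.py > output.csv
writer = csv.writer(sys.stdout, quoting=csv.QUOTE_ALL)
for line in sys.stdin:
    writer.writerow(line.rstrip('\n').split('\t'))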

Saturday, November 15, 2008

xen on fc8 lessons learned

So, one of the things we had to do recently was to move off of amazon ec2 and appnexus, onto our own environment in the datacenter. Our datacenter environment runs fedora (fc8), and we don't have bandwidth at the moment to upgrade to fc9 or centos. You'd think that installing xen would be straightforward, but there were a lot of quirks that I ran across.

1) The machine I had to repurpose was a fc8 machine running on lvm volumes. However, the volumes were set up as a small swap volume, and a single volume mounted as root that contained all available storage. Thankfully, the directions found here were a lifesaver in resizing the root partition without a drive down to the datacenter. http://blog.emmesdee.com/2008/10/remote-resizing-root-lvm-partiion.html
2) I never did manage to create instances successfully from a config file. I did have success using the virt-install tool. I created a 5 GB file-based instance using the following for my install location: http://download.fedora.redhat.com/pub/fedora/linux/releases/8/Fedora/x86_64/os/
3) I used the virt-clone tool to copy my first working instance to other instances, copying my 5 GB file-based instance into 30 GB and 100 GB lvm volume-based instances. However, copying from one lvm instance to another did not produce working instances. In addition, it is probably faster to copy a 5 GB instance into a 100 GB volume and then resize it than to copy 100 GB instances, as they don't appear to be sparse.
4) Allocating memory to an instance removes it from domain-0. If you take away so much memory from domain-0 that it runs short, it generates a fatal error, grinding the entire system to a halt. It would sometimes reboot successfully, but generally it would reproduce the fatal error shortly afterwards, and sometimes it would not reboot successfully. An unsuccessful reboot for me means a trip to our datacenter. I did not have time to do conclusive testing, but it seemed that dropping domain-0 below the memory it was actually using would immediately produce the fatal error, and that buffers counted toward that usage.

Friday, October 17, 2008

Remote resizing a root lvm partition

Hmm... trip to the datacenter, or figure out how to reduce the size of the big server's root partition without booting to a CD?

Thankfully, someone else solved the problem, as you only get one shot at it. Here's the summary:

  • Recompile resize2fs statically and patch it to skip a check.
  • Add e2fsck and resize2fs to your initrd.
  • Add the commands after the mkrootdev line in the init script.
  • Create a new initrd, reboot with it, and cross your fingers.
  • Come back after a game of ping pong and see if the machine is reachable.

Reducing the size of your root partition. (tummy.com, ltd. Journal Entry)

Thursday, October 2, 2008

Google App Engine thoughts

Google App Engine (GAE) appears to be quite promising. The most promising aspect of all is the price: free for fewer than 5 million page views per month. Unfortunately, Google Page Creator, a service that allowed you to create simple web pages, is being phased out by the end of the year, so it's unclear to me whether GAE will always provide a free tier. The replacement for Google Page Creator, Google Sites, does not allow for the upload of arbitrary pages.

What I'm personally working on for fun is GAE as the business layer of a traditional 3-tier database-backed web application, perhaps a combined scoreboard and repository for a simple game. By doing so, I think it will be possible to avoid Google lock-in, though at the cost of most of the scaling benefits. I think the following components will allow me to avoid google lock-in with minimal expense.
  • A cheap postgres server
  • Web service for database (PHP?)
  • GAE to talk to the database and host games
  • RIA client to minimize work in GAE
I might just serve up the database on my home connection. Because GAE is the only user of the database, I should be able to just use a non-standard port and hope my ISP does not complain.
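
The GAE side of this would just use the urlfetch API to call the database web service. A rough sketch; the endpoint, query parameters, and wire format are all hypothetical at this point:

from google.appengine.api import urlfetch

# Placeholder endpoint for the home-hosted database web service,
# listening on a non-standard port.
DB_SERVICE = 'http://db.example.com:8123/query'

def top_scores(limit=10):
    # The query interface and response format are still undecided; this
    # just shows the round trip from GAE to the database service.
    response = urlfetch.fetch('%s?q=top_scores&limit=%d' % (DB_SERVICE, limit))
    if response.status_code != 200:
        raise RuntimeError('db service returned %d' % response.status_code)
    return response.content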

The RIA client should consist of static files, most likely compiled Flash or GWT. I suspect that GAE is not the best way to serve up the client, but it's probably the most reliable free web hosting that I have now that Google Page Creator is going away.

Friday, September 19, 2008

Scrum Tools

Pen, sticky notes, task board, burndown chart.

Yes, we have those, and no, they're not an optimal tool set. Over the past year, I have assembled a set of tools that the team and I have found useful for Scrum. Many of them may be obvious, but the list here is a compilation of what we have liked enough to keep after a year and would happily recommend to others.
  • MediaWiki
    • Documentation in a wiki may not be the most organized presentation of information, but it is the easiest to create. My experience is that the greatest barrier to documentation is having it exist at all. Our wiki has lowered the barriers to creating documentation, and the ability to search through it mitigates most hassles. It is much easier to search a wiki for a long-forgotten quirk of our software than it is to interview a team of developers that has not seen that part of the code base for months.
  • Danube Scrumworks Basic
    • Excel is nice, but it isn't always the easiest thing to use for maintaining a living document. Scrumworks improves data entry for stories, generates burndown charts, allows us to link stories directly to the wiki, and provides a permanent record that lends itself well to nightly backups. The most common use we have for the permanent record is to look up our estimates for old stories. While the latest Pro version would be nice, the free basic edition has been enough for our needs. I suspect that if pressed, we would revert to Excel rather than pay for Pro.
  • Henrik Kniberg's Card Generator
    • Henrik Kniberg's card generator, available on his blog, is a little Access report that converts spreadsheets into index cards. I use a custom version that accepts the export of a Scrumworks database.

Wednesday, September 3, 2008

Replacing Google Page Creator

While making some small updates to one of my Google Page Creator sites, I discovered that they are phasing out the service. I was sad to hear this, as most of the web sites I need to create are a handful of pages or less, or just a small repository for some images. I don't know whether Google is going to start cracking down on this, but it's pretty trivial to create a Google App Engine application that does nothing but serve up static content. An additional benefit is that I can make the default URL map to a useful page, instead of having to link to an index page.
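
For anyone trying the same thing: the whole "application" can be little more than an app.yaml that maps everything to static files, with the default URL pointed at a real page. This is a sketch from memory, so double-check the handler syntax against the GAE docs:

application: mysite
version: 1
runtime: python
api_version: 1

handlers:
# Make the default URL serve a useful page directly.
- url: /
  static_files: static/index.html
  upload: static/index.html
# Everything else maps straight into the static directory.
- url: /(.*)
  static_files: static/\1
  upload: static/(.*)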

Monday, August 18, 2008

More Google App Engine RSS

As it turns out, I had a need to scrape the Schlock Mercenary website in order to gather information after all. I was hoping that the clockwork-like regularity of its updates would let me generate the RSS feed independently. Thankfully, I was able to request a unique class attribute on the information I wanted, and it gave me an opportunity to mess around with HTML parsing in python. I initially tried to write a solution using the built-in DOM parser, xml.dom.minidom. However, this rapidly ground to a halt when I discovered that the minidom library will generally throw an exception if the web page is not valid xhtml, making it too fragile for practical use. After some investigation, I discovered a library with the odd name of Beautiful Soup, which is designed for scraping web sites and providing a DOM. I found the library to be powerful, but the documentation a little lacking. The syntax is a little odd when it comes to element attributes whose names collide with python keywords or builtins (id, class, etc.).

The following code scrapes the contents of the table with class='FOOTNOTE' if it exists. If not, it scrapes the next table with a width attribute after a particular image.

import re
from BeautifulSoup import BeautifulSoup

def scrape_footnote(result, date):
    soup = BeautifulSoup(result.content)
    # Grab the footnote table directly if it is tagged with the class.
    table = soup.find('table', attrs={'class': 'FOOTNOTE'})
    if table is None:
        # Fall back to the first table with a width attribute following
        # the strip image for this date.
        img = soup.find(src=re.compile('/comics/schlock%s' % date))
        table = img.findNext('table', width=True)
    return " ".join(str(v) for v in table.tr.td.contents)

Friday, August 8, 2008

Google App Engine powered RSS feeds

I am a huge Schlock Mercenary fan, and I've been looking for a simple project to teach me a bit about Google App Engine (GAE). I originally thought that this might be a trivial use of GAE, but it solves a problem I have had for years. Schlock Mercenary releases a new strip every night at 8 PM PDT, but offers no RSS feed, partially due to the extreme regularity of the update schedule. I've been wanting an RSS feed for the strip, and after talking with the author, Howard Tayler, over the past week, he has been surprisingly receptive. He surprised me with a request to include the images within the feed itself. A mirror of the project can be found here, though the real thing is served through feedburner.

http://schlockrss.emmesdee.com
  • New strips are published at 8 PM PDT.
  • Today's strip is a link to the front page before 5PM PDT.
  • Today's strip links to the archive after 5PM PDT.
  • All other links go to the archive.
  • The Atom feed does not contain images, but the RSS feed does.
  • The comic consists of 3 JPGs on Sunday.
  • The comic consists of a single PNG or JPG on other days, depending on the amount of shading.
  • Project Wonderful ad integration -- I wanted to add this, but my original approach turned out not to be feasible. The feed would require a new agreement with Project Wonderful, and simply piggybacking off of the existing ad arrangement of the main site would be a violation.
  • AdSense ads are provided for free once you use feedburner.
This turned out to be a great learning experience for GAE, Atom, and RSS. I was disappointed to learn that Atom has some serious limitations when it comes to including images within the feed itself. What made this truly enjoyable was interacting with Howard. He was very receptive to the feed, and he was able to clarify several things about the site as well as make a few requests that turned this from a trivial project into a true learning experience.

I originally created the feed as an Atom feed, as it is more of an open standard, while the RSS format appeared to have some odd ownership quirks. This worked fine at first, until Howard asked me to put images into the feed itself. After much wrestling with the Atom format, I came to the conclusion that putting arbitrary images into an Atom feed was not going to work. Since RSS and Atom are both easy to implement, the solution was to learn more about RSS and implement an RSS image feed.

The second quirk came about when I realized that I would have to scrape data off of the web site itself in order to generate the image links. Howard publishes most dailies as PNG, but when he puts extra effort into the shading, he likes to publish as JPG. Thankfully, telling them apart is as simple as checking for the PNG and seeing if I get a 404, and caching the results gave me a good reason to learn memcache.
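
For the curious, the check-and-cache logic is only a few lines on GAE. A minimal sketch, assuming a hypothetical URL layout for the daily strip image:

from google.appengine.api import memcache, urlfetch

def strip_extension(date):
    # Decide whether the strip for this date is a PNG or a JPG by probing
    # for the PNG, and cache the answer so we only probe once per strip.
    key = 'ext:%s' % date
    ext = memcache.get(key)
    if ext is None:
        # Hypothetical image URL; the real path is scraped from the site.
        url = 'http://www.schlockmercenary.com/comics/schlock%s.png' % date
        ext = 'jpg' if urlfetch.fetch(url).status_code == 404 else 'png'
        memcache.set(key, ext)
    return ext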

Wednesday, July 30, 2008

A year of Scrum, but...

I looked recently, and we started doing sprints in the middle of June last year. It's been over a year since I made the big push to add Scrum to our development methodology, and I'm happy with the overall results. Visibility of impediments was immediate, velocity was remarkably stable, and the amount of over-engineering that was done in the past was reduced. I don't want to diminish the gains that we have made, but it is more interesting to talk about what didn't work ideally than about what went according to plan. I was amused to learn that our software methodology has a name: "scrumbut".

Rhythm
I feel that many of our problems come from a lack of rhythm. Once iterating every two weeks becomes a habit, it is easier to keep doing things correctly. We've fallen prey to extending sprints a day or two "just to finish the stuff that is almost complete", with some of these stretching to 3 weeks. While resetting a sprint, which is what is supposed to happen instead, is painful, I suspect that the real cause of our lack of rhythm is a lack of priority when it comes to software development.

Support
At my company, we sell a hardware appliance that provides a service. As a result, we have many different units in the field, and our top priority is generally to get any failing unit back online, or at least to retrieve its data.

What this means in practice is that when we arrive in the morning, there may be crises waiting for us that take priority over the daily standup. Similarly, the vast majority of unplanned items throughout a sprint involve support tasks, or one-off customizations. Support intensive sprints are the primary cause for missing our sprint goals.

While I definitely value the support that we do, I have yet to figure out a good way to integrate it well with Scrum. A simple Scrum concept, the daily standup, is problematic for us at times, as there just isn't a good time of the day where everyone can be ready for a standup meeting.

Management buy-in
The good news is that we were given a good amount of freedom when putting together our development methodology. The bad news is that management, in general, was ambivalent about Scrum. Where this caused the most trouble was in the role of Product Owner/Customer. Our VP of Engineering functioned as PO due to the company structure, but he was not provided with clear direction on the future or priorities of the product. As a result, we had a lot of churn as we changed directions several times, churn that could have been avoided if we had a single decision maker driving the product.

QA Stories
Our QA has always been problematic to integrate into development. We have a skilled QA Engineer, who unfortunately does not have enough background for development tasks, and the situation has led to some specialization of roles. Our story estimates have generally been done by estimating development, estimating QA, and then producing an aggregate number. When QA and development are not in sync, usually because of a regression test, the dreaded QA stories start to crop up. Unfortunately, I don't have a good solution to this problem except by changing team membership, and that is not an acceptable option.

Team dynamics
Scrumbut, as we have implemented it, has not adapted well to personnel changes. Recently, some changes have been made to the organization that have resulted in our team becoming two teams working on separate projects. It has been difficult to see the value of a combined standup, but the new team sizes don't really justify separate standups. Similarly, we have tried to include our summer intern in our standups, but his work has been entirely separate from the core team. I believe that this impediment can be generalized as one of team sizes. Instead of one team, we have become three teams, two of which are too small to justify the use of scrum. What I take away from this is that the guideline of 7 members, plus or minus 2, has been accurate for us when it comes to agile development and ideal team size.

Sunday, June 8, 2008

PMP Lessons Learned

I passed my PMP Certification on June 7! I was proficient in 3 sections, and moderately proficient in 3 sections.

There was one question that had me completely stumped. I was shown a PDM and asked which of 4 nodes had "bipert float". I had no idea, and just guessed based on which node looked the least like the other three. Google was of no help once I got home, and I'm reasonably sure that this was a test question that contained a typo of some sort.

Studying Recommendations
I took my practice exams in order from perceived lowest quality to highest quality. The earlier exams were used primarily to identify study points and gauge my rough readiness; the later exams were used as a last chance to reschedule before the 48 hour threshold. I found many overviews and study guides online of varying quality, and it was useful for me to read all of them. Each one presented the material differently, giving me more opportunities to truly understand the material rather than memorizing the words. I also found that each study guide took 15 minutes to an hour to read, which was the right size block of time to set aside in a single session.

When reading a study guide or taking a mock exam, it is vital to take notes on which questions or topics are giving you trouble. Once I was done with a mock exam, or immediately when I encountered confusion in a study guide, I would check my books and search online for the answer. Searching online would frequently lead me to additional resources.

PMBoK
  • Some people passed the test without this, but I needed it to brush up on ITTO and Process Groups.
  • While you do not need to memorize the ITTO, it will be easier if you know the following things to quickly rule out choices
    • Whether something is in the ITTO (i.e. Project Status Report)
    • Whether an input or output looks out of place
    • Whether steps are being done out of order
Head First PMP (Andrew Stellman, Jennifer Greene)
  • Read the PMBoK as quickly as possible to get an overview, and then start with this.
PMP Exam Study Guide (Kim Heldman)
  • I read this first, but recommend that others read Head First PMP before it.
  • I saved this practice exam for last. It is included on the CD, and the interactive, computer-based format of the exam helps to familiarize you for the real thing. The questions were also of good quality.
Oliver Lehmann free sample exams (online 75 question and downloadable 175 question)
  • These were good exams.
TutorialsPoint free sample exams (2x 200 question)
  • I did not like these exams as much as Oliver Lehmann, Head First, or the one included in the Kim Heldman book, but they were good starting points.
Head First PMP free sample exam
  • I liked this one a lot. I agree with some of the other reviews, that suggest that the questions on this exam might be a bit easier than an actual exam. This was my last paper based sample exam.
Unknown PowerPoint
  • Another good overview
Preplogic free 15 min study guide
  • I recommend this after reading at least one book.
Rita Mulcahy
  • I saw the many warnings about the material, and a cursory browse of the free samples suggested that they were justified.
  • The primary complaint of the material is that her phrasing and word choice does not match the real exam very closely, and that many people find her layout and explanations confusing as a result.
  • Most of the material I had to pay to see, and I was not lacking in free resources.

Friday, June 6, 2008

PMP mock exams

For posterity, here is a record of all of the practice exams that I took, and my notes following each exam. Starting a week before my exam, I decided that I would take at least one mock exam a night, in order to prepare for the real thing.

I made notes after any question that gave me trouble outside of ITTO memorization, and about any trend I saw in the questions vs. my knowledge. Two books, over a year old, cannot cover every topic that might be on the exam, so my hope is that my notes would lead me to supplemental studying materials.

On June 4, I took the Head First PMP Practice exam, which I felt was going to be the most accurate exam. Based on the results of the exam, I decided not to reschedule my June 7 exam date.

May 31: PMZilla practice exams -- N/A
  • Stopped after ~20 questions
  • Only correct answers have explanations
  • Could not understand half of the questions due to grammatical errors
  • Many questions seemed to have multiple correct answers
  • Many answers felt incorrect, but I cannot confirm due to my unsure knowledge of PMP
  • Many questions had a rote answer that was suboptimal.
  • Many questions involved picking the least incorrect of 4 answers.
  • (Score was 32% before I gave up)

May 31: 75 Question Oliver Lehmann -- 62%
  • Very Good Exam
  • Question format matches my two books
  • All answers have explanations
  • Topics to study before the next exam
    • "Management by Projects"
    • FTY and RTY
    • Must study 44 processes
    • Must study ITTO!
    • Must be able to tell a knowledge area from a process phase from a process group
June 1: 100 question exam from ajithn.com -- 57%
  • Mediocre Practice Exam
  • contains user submitted content (saw a question from lehmann's 75 question exam)
  • No feedback: Does not tell you what questions are correct
  • Topics to study before the next exam
    • warranties
    • leadership styles
    • PERT details
    • 3 sigma/6 sigma -- memorize 1-6 standard deviation #s
    • Earned Value Method (EVM) details
June 2: 200 question exam (TutorialPoint 1) -- 65%
  • OK Practice Exam
  • Some answers are clearly wrong
  • Questions do not appear to be original (choices are numbers, but correct answer is a letter)
  • Topics to study before the next exam
    • GERT? (Graphical)
    • PERT details (oh, it's the pessimistic/optimistic predictive method)
    • TCPI? (T??? CPI)
    • Design of Experiments
June 3: 200 question exam (TutorialPoint 2) – 74%
  • OK Practice Exam
  • Better of the two
  • Questions still don't seem original
June 4: 200 question exam (Head First PMP) – 80%
  • Good Practice Exam
  • Newer exam, so most questions not stolen by other websites yet.
  • Very clear questions. None with broken English or mixed-up terminology
  • Supposedly the wording is trickier on the real thing
  • Answers have explanations provided. Very helpful.
June 5: 175 Question Oliver Lehmann – 78%
  • Very Good Exam
  • Question format matches my two books
  • Answers have very brief explanations
  • Harder than Head First PMP
June 6: Kim Heldman exam from CD-ROM – 80%
  • Taken last due to exam format
  • Book is 4 years old.
  • Questions do not seem dated, compared to Head First and Oliver Lehmann
That's it! Just the real thing left.

Sunday, June 1, 2008

PMP online resources

To complement my favorite books, I looked at many websites filled with information on the PMP exam. Unfortunately, many of them focus on the rote memorization style of learning, and many of them are filled with errors made when copying material from each other. I gained value from the following websites, all of which contained free material.
  • Head First PMP has a companion web site for the book that includes a 200 question mock exam and forums.
  • oliverlehmann.com has a 75 and 175 question mock exam, as well as study materials.
  • preparepm.com includes a 70 question mock exam and a bunch of tutorials.
  • tutorialspoint contains two 200 question mock exams.
  • This PowerPoint showed up when I was searching for information on leadership styles, which was not well covered by my two books.
  • Preplogic has a 15 min study guide that presents another overview of the material
  • gantthead.com contains Project management forums including this interesting thread containing many potentially useful links for exam preparation.
  • pm-professional is a blog with a section on PMP preparation.
  • pmhut contains an interesting article on the value of a PMP certification.

Friday, May 30, 2008

PMP books

Two books have really stood out as good resources for this exam so far. I recommend them both, but I wish I had discovered Head First PMP first. Head First PMP comes with a 200 question exam and an active forum, but both of these are available at the Head First website without purchase of the book. The PMP Exam Study Guide came with a CD that included flash cards, an ebook, and a mock exam. I did not use anything from the CD except for the mock exam.


Jennifer Greene and Andrew Stellman's Head First PMP is my favorite PMP book. It manages to cover the basics while remaining very easy to read. It is entertaining as far as these sorts of books go, and it encourages problem solving while reading in order to improve understanding of the material. However, a single, illustration-filled book is insufficient to properly cover the material for the exam, so I recommend supplementing it with the second book and online resources.
Kim Heldman's PMP Exam Study Guide is the first book I read on PMP, and I actually brought it with me on my honeymoon to read on the plane. Unsurprisingly, the material reads like a reference manual. I got through perhaps 60 pages of it during 30 hours of flight, and it took me two months to finish the book, partially due to a lack of urgency. While this is a good textbook-style treatment of the PMP material, I recommend reading it second, as it drills deeper into individual topics.

Monday, May 26, 2008

PMP exam scheduled!

I've been preparing for PMI's Project Management Professional (PMP) certification, to demonstrate to current and future employers the knowledge that I've picked up over my years as a software professional. I have scheduled the exam for June 7, roughly two weeks from now, in order to give myself a solid deadline. The way I work and study, I can't sit down and focus, or feel the urgency, if the goal is nebulous and does not feel immediate. I have until 48 hours beforehand to reschedule in case it looks like I'm not ready yet; after that, backing out means throwing away the $405 exam fee. Over the next couple of weeks, I plan on describing the books I used, the online resources I found, my mock exam results, and my lessons learned.

Monday, February 4, 2008

Agile wedding planning?

No, I didn't do burndown charts or sprints. However, we did keep a spreadsheet on google docs with a list of stories and deadlines. Since we also had to handle finances, these stories contained not only an initial estimate and time remaining, but also an estimated cost and money spent.

Everyone went out of their way to tell us how much they liked the wedding, and we didn't go too far over budget, so I'm happy with the results. This has definitely been my most personally rewarding project. :) The only thing I would change would be to delegate a few more tasks on the day of the wedding that I didn't expect to be problems.

Wednesday, July 4, 2007

Scrum Books

After trying out scrum for a couple of sprints, we decided to make some changes and go get some guidance. The biggest change for us is 2 week sprints. 1 week simply wasn't long enough to complete a lot of our stories if anything was impeded or unexpected, and it generally resulted in each developer working primarily on a single story for the duration of the sprint.

There are a couple of books that stood out as being helpful here, both from Mike Cohn, an active member of the Scrum community.


Mike Cohn's Agile Estimating and Planning is a primer on putting together a project plan using estimating techniques such as user stories.
Mike Cohn's User Stories Applied is my other recommendation. It talks in depth about what a quality story looks like, what questions need to be answered at what level of detail, and how to arrive at a good estimate. One of the more interesting subjects for me was the "Epic Story", a very large story with a very rough level of detail that serves as a placeholder for a feature set that will not be implemented in the immediate future.

Wednesday, June 13, 2007

Starting Scrum

We've gotten approval to implement Scrum at work. We'll see how things go, but I'm excited that the team is willing to give it a shot and try not to come in with too many preconceived notions about whether the recommendations of Scrum are appropriate for our environment.

We've gotten an initial backlog done in Excel, and are trying out 1 week sprints to start.

Monday, January 1, 2001

Welcome to Emmes Dee!

Emmes Dee is a place to store my thoughts. You can blame Carnegie Mellon for giving me this name. After over 15 years, it's second nature. Here, you can expect me to talk about interesting developments as a software architect and project manager. I've recently been enjoying my work enough that I've started working on small projects during my spare time, and I expect to discuss them here.