<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<link rel="stylesheet" href="./styles.css">
<title>Bill Mattern - resume</title>
</head>
<body>
<h1>Bill Mattern</h1>
<p>
Big Data analyst with years of experience. Proven ability to create, maintain, and enhance big data systems.
Develops systems using Bash, Java, Python, Git, Cloudera Hadoop, Hive, Cloudera Manager, YARN, Oozie, and Autosys.
</p>
<h2>TECHNOLOGY SUMMARY</h2>
<p>
Java, Python, SQL, Hadoop, Hive, Git
</p>
<h2>EXPERIENCE</h2>
<h3>TD Bank, Mount Laurel, NJ</h3>
<h4>Sr. IT Data Analyst, January 2019 – present</h4>
<ul>
<li>Developed common big data processing frameworks in Python including data migration and delta processing.</li>
<li>Developed Python clients for interacting with schedules, SQLite, Hive, and HDFS.</li>
<li>Documented framework tool interfaces, trained and supported end users of these framework tools to create data
pipeline components.</li>
<li>Coordinated execution of batch processing solutions using Bash scripting.</li>
<li>Supported applications in production, providing insights about root cause and potential resolutions.</li>
<li>Executed build and deploy scripts and validated deployed artifacts.</li>
<li>Planned, coordinated, and supervised deployment of critical big data software projects.</li>
<li>Performed code reviews and helped enforce incremental improvements to development practices.</li>
<li>Consulted on data pipeline application development and made solution design decisions for critical data
pipelines.</li>
</ul>
<h4>IT Data Analyst III, April 2017 – December 2018</h4>
<ul>
<li>Created Hive tables and HiveQL queries. Developed and maintained scripts for testing and validation.</li>
<li>Scheduled Hadoop jobs with Oozie and Talend Administration Center.</li>
<li>Created ETL pipelines using Talend Studio components such as tHiveInput, tMap, tSqlRow, and tFileOutput.</li>
<li>Wrote a Java utility to automate parts of the business metadata tagging process.</li>
<li>Configured ingestion jobs using the Podium tool.</li>
</ul>
<h3>Zip Code Wilmington, Wilmington, DE</h3>
<h4>Student Software Developer, January 2017 – March 2017</h4>
<ul>
<li>Studied 80+ hours per week learning Java, test-driven development, Agile methodology, and object-oriented
programming.</li>
</ul>
<h3>JPMorgan Chase &amp; Co., Newark, DE</h3>
<h4>Account Opening and Reference Data Specialist, July 2012 – January 2017</h4>
<ul>
<li>Acted as the subject-matter expert for paperless delivery of client documents and provided weekly reports on
discrepancies between statement delivery and Morgan Online.</li>
<li>Oversaw a team of five counterparts working in India, led weekly meetings over telepresence, and maintained
compliance.</li>
<li>Developed procedures to ensure SEC compliance for client statement delivery, investigated discrepancies between
client expectation and database settings, and trained new employees.</li>
</ul>
<h2>EDUCATION</h2>
<h3>University of Delaware, Newark, DE</h3>
<ul>
<li>Bachelor of Science in Finance, May 2011</li>
</ul>
</body>
</html>