Scraping iMessage and Messenger Messages and Displaying with Vue Frontend

Credit: she founded the project and provided the first version of the scraper.

A while ago my partner in the organization started message-analyzer because she thought it would be interesting to analyze the message data between us. She managed to scrape text messages out of both iMessage and Messenger (the two chat apps that we use), put them together, and built something that could decide which one of us a message came from. I believe the highest accuracy she reached was 86%.

I was looking around in the project after she got most of it done and noticed a file called app.py that runs a Flask application and serves the text messages on a web server. Since I’m pretty much a frontend developer now (no), I came up with the idea of displaying all of our messages on a web page, hopefully merging the content from both the Apple and Facebook platforms.

iMessage

I started with iMessage. It wasn’t too hard to simply take the output of the function that she wrote and serve it over an API endpoint. For the frontend I decided to try out Vue.

It wasn’t long before I got to the following:

The main component simply requests all messages and passes each one to a Message component. I added pagination for convenience.

Message component looks like this:

It just displays the message content. If hovered, the delivered time is shown as a tooltip.

It all looked good, but what about attachments? There were hundreds of interesting images, stickers and files that we had sent each other. The page would not be nearly as interesting if those were lost.

To show attachments, I dug deeper into how Apple stores messages.

As my partner had discovered, Apple stores messages in a SQLite database located at ~/Library/Messages/chat.db, so I took the liberty of looking at the schema.

Three tables caught my attention: attachment, message_attachment_join, and message.

attachment:
    filename
message:
    ROWID
message_attachment_join:
    message_id
    attachment_id

The message_id matches the ROWID in the message table, and filename is actually a path to the attachment file on the local machine. With this information at hand, I revised the SQLite query to join the three tables.
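The revised query itself didn’t survive the trip to this page, so here’s a sketch of what such a join could look like using Python’s sqlite3 — only the tables and columns named above are assumed, everything else is illustrative:

```python
import sqlite3
from pathlib import Path

# Location of the local iMessage database on macOS.
DB_PATH = Path.home() / "Library/Messages/chat.db"

# Join messages to their attachments through message_attachment_join.
# LEFT JOINs keep messages that have no attachment at all.
QUERY = """
SELECT m.ROWID, m.text, a.filename
FROM message m
LEFT JOIN message_attachment_join maj ON maj.message_id = m.ROWID
LEFT JOIN attachment a ON a.ROWID = maj.attachment_id
ORDER BY m.ROWID
"""

def fetch_messages(db_path=DB_PATH):
    with sqlite3.connect(str(db_path)) as conn:
        return conn.execute(QUERY).fetchall()
```

The LEFT JOINs matter: text-only messages still come back, just with a NULL filename.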

After the messages and attachments are selected, I served the attachments over the API endpoint /attachments, and voila: pictures on the page!
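The route isn’t shown here, but serving a stored attachment path can be as simple as a Flask sketch like this (the query parameter name and the path check are my assumptions, not necessarily how app.py does it):

```python
from pathlib import Path

from flask import Flask, abort, request, send_file

app = Flask(__name__)

# iMessage keeps attachment files under this directory.
ALLOWED_ROOT = Path.home() / "Library/Messages/Attachments"

@app.route("/attachments")
def attachments():
    # attachment.filename in chat.db stores a path on the local machine.
    raw = request.args.get("path", "")
    path = Path(raw).expanduser().resolve()
    # Refuse anything outside the attachments directory.
    if ALLOWED_ROOT not in path.parents:
        abort(403)
    if not path.is_file():
        abort(404)
    return send_file(path)
```

The check against ALLOWED_ROOT matters: filename is an absolute path, and blindly send_file-ing whatever path the client asks for would expose the whole disk.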

I later also displayed reactions to messages but I’d like to get to scraping Messenger soon.

Messenger

Scraping Messenger is a little trickier: my partner did it by scrolling all the way up to the top of the conversation, saving the HTML file and extracting information from there. However, since the data has already been parsed once by the Messenger frontend, it was difficult to recover the dates and attachments along with the messages.

I went into Chrome devtools and saw that the juicy requests went to the url facebook.com/graphqlbatch. Ah, so they use their own product. What’s frustrating is that each request retrieves at most ~200 messages, and Chrome doesn’t let me copy multiple request responses at a time.

I tried to reverse engineer how the requests are formatted, but got stuck figuring out how the message count offset was sent. So I came up with the idea of writing a Chrome extension to capture the web requests.

The only extension API that gives you access to response bodies is devtools. Creating an extension is also easy – you just need a manifest.json file that specifies the extension and some JS scripts to be run by the browser, so I did this:
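The actual extension files aren’t reproduced here, but a minimal devtools extension needs little more than a manifest along these lines (the name, version and file names are placeholders):

```json
{
  "manifest_version": 2,
  "name": "messenger-capture",
  "version": "0.1",
  "description": "Dump graphqlbatch response bodies from the devtools network log",
  "devtools_page": "devtools.html"
}
```

devtools.html then loads a script that listens on chrome.devtools.network.onRequestFinished and reads each response body via the request’s getContent callback – that’s the API that regular extension contexts don’t get.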

and used pyautogui from my partner’s code to automatically scroll up like an idiot. I was able to get all messages in the devtools window of the devtools window (no typo). The repository is here.
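The scrolling loop amounts to something like this (a sketch: the counts and delay are made-up numbers, and the scroll step is injected so it can be tested, but in practice it would be pyautogui.scroll):

```python
import time

def scroll_to_top(scroll, rounds=1000, pause=0.5):
    """Repeatedly scroll the focused chat window up so Messenger keeps
    fetching older ~200-message batches; `scroll` is e.g. pyautogui.scroll."""
    for _ in range(rounds):
        scroll(20)         # positive clicks scroll upward
        time.sleep(pause)  # give the next graphqlbatch request time to land

# With pyautogui installed: scroll_to_top(pyautogui.scroll)
```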

All that was left was parsing the retrieved data and making sure both message sources end up in the same format when returned by the Flask server. Messenger has more attachment types and allows multiple attachments per message, so that part took longer.
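The common format isn’t spelled out above; here is a sketch of the kind of normalizer I mean (every field name here is my own invention):

```python
from datetime import datetime, timezone

def normalize(sender, text, timestamp_ms, attachments=(), source="imessage"):
    """Collapse both scrapers' output into one shape, so the Flask server
    returns identical records whether a message came from iMessage or Messenger."""
    return {
        "sender": sender,
        "text": text or "",
        "sent_at": datetime.fromtimestamp(timestamp_ms / 1000, tz=timezone.utc).isoformat(),
        "attachments": list(attachments),  # paths or /attachments URLs
        "source": source,                  # "imessage" or "messenger"
    }
```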

Due to privacy reasons, I can’t do a demo here :/ well mostly it’s just that I’m too lazy to put up a page with fake message data.

For future features I plan to do searching, improve pagination, style the Messenger system messages (“you waved at each other”), and make the UI prettier and easier to use.

Center a div with unknown width – CSS trick

To center a div, the typical margin: 0 auto; CSS rule requires a known width to work, either a percentage or a fixed number of pixels. I came across this StackOverflow answer the other day that solves this perfectly. Sadly it’s not even the accepted answer 😕

#wrapper {
   position: relative;
   left: 50%;
   float: left;
}
#page {
   position: relative;
   left: -50%;
   float: left;
}
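For reference, the markup this assumes is just one div nested inside the other:

```html
<div id="wrapper">
  <div id="page">
    content to center
  </div>
</div>
```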

Note that in CSS, position: relative; means relative to the block’s “normal” position, so left specifies an offset from where the block would otherwise be.

Percentage values for left are interpreted as percentages of the parent’s width. In this case, the page div should be wrapper‘s only child, so their widths are the same. The left: 50% on wrapper pushes it right by half the parent’s width, and the left: -50% on page moves the block back left by half of its own width – combined, it’s centered! 😋

Nginx reverse proxy + Docker mount volume

Today I did two things for my blog project: added a proxy on my Nginx server for the API connection, and mounted the /data/db directory from the host into the Docker container to achieve data persistence.

First, Nginx proxy.

The idea/goal isn’t that complicated. There are three docker containers running on my production machine for the blog, by service names:

  1. blog-api, the api server that listens to :1717
  2. web, the frontend nginx server that listens to :80
  3. mongodb, the mongodb database that listens to :27017

Before today, I had to configure the port in the frontend code so that the frontend would call the API endpoints with the base URL plus the port number. As if that didn’t bother me enough, for image uploading all of the URLs for embedding images in posts had the port numbers in them, so they looked like this:

vcm-3422.vm.duke.edu:1717/uploads/image.png

It is counterintuitive for the port number to be shown to users, so I began looking for a solution. Nginx turns out to have a reverse proxy configuration that lets you forward requests for some location to another port number, or even a remote server. It’s called a “reverse” proxy because unlike “normal” proxies, the nginx server is the first server the client connects to, whereas with a regular proxy the proxy server comes first and the origin server sits behind it.

With some trial and error, I came to this:

http {
	upstream docker-api {
		server blog-api:1717;
	}

	server {
		listen 80;
		root /app/dist;
		location / {
			try_files $uri /index.html;
		}

		location /api {
  			rewrite /api/(.*) /$1  break;
			proxy_pass http://docker-api;
			proxy_set_header X-Forwarded-Host $server_name;
			proxy_set_header X-Real-IP $remote_addr;
		}
	}
	
	# other configs
}

The upstream name following the upstream keyword can be arbitrary, but the server name blog-api matches my Docker service name and serves as the host name for the second hop, as referenced by the proxy_pass directive below.

The location /api block does the proxying. The first line rewrites the request so that the /api prefix is stripped, since “api” is only used for routing. The next few lines are pretty standard: they pass along the original headers.

Voila! When I built and up’d my docker-compose services, I could see the blog posts showing up just like before. However, when I casually tested image upload, I got a 413 Request Entity Too Large error in the browser console. Apparently this was caused by the new nginx config, but how?

After some Googling, it turns out that nginx has an HTTP server setting called client_max_body_size, which defaults to 1M. What’s more, the documentation says that any request with a body larger than this limit gets a 413 error, and that this error cannot be properly displayed by browsers! Okay… so in the server block I added

client_max_body_size 8m;

and everything works just fine 👌.

 

For the Docker volume configuration, I set this requirement for myself because my MongoDB data had been stored inside the Docker container – it gets lost whenever the container is removed. To make backups easier and to have more confidence in data persistence, I wanted to mount the database content from the host machine instead.

So in my docker-compose.yml, I added this simple thing to the mongodb service:

volumes:
      - /data/db:/data/db

Yes two lines and I included in my post. What can you do about it 🤨

Now I can just log in to my production machine and copy away the /data/db directory to back up my precious blog post data 🙂

 

Reference:

Use NGINX As A Reverse Proxy To Your Containerized Docker Applications

Daily bothering with launchd and AppleScript 

Credit: All of the following “someone” is this one.

This morning I noticed that this repo was forked into my GitHub organization. I’m still not sure what the original intent was, but I interpreted it as permission/an offer to contribute. Since the repo’s name is “simplifyLifeScripts”, I spent some time pondering what kind of scripts would simplify lives, or more specifically, my life. I then came up with this brilliant idea of automating iMessage sending so that my Mac can send someone this picture of Violet Evergarden on a daily basis:

Violet

In the past I had to do this manually by dragging the picture into the small iMessage text box, which was simply too painful to do (I blame Apple). How cool and fulfilling would it be to sit back and let an Apple product annoy an Apple employee!

After some Googling I came across this snippet of AppleScript that lets you send an iMessage from your account:

on run {targetBuddyPhone, targetMessage}
    tell application "Messages"
        set targetService to 1st service whose service type = iMessage
        set targetBuddy to buddy targetBuddyPhone of targetService
        send targetMessage to targetBuddy
    end tell
end run

Basically it takes the iMessage service from the system and tells it to send the message to a person identified by a phone number.

Since I also have to send an image as an attachment, I added to this piece so it became:

on run {targetBuddyPhone, targetMessage, targetFile}
    tell application "Messages"
        set targetService to 1st service whose service type = iMessage
        set targetBuddy to buddy targetBuddyPhone of targetService

        set filenameLength to the length of targetFile
        if filenameLength > 0 then
            set attachment1 to (targetFile as POSIX file)
            send attachment1 to targetBuddy
        end if

        set messageLength to the length of targetMessage
        if messageLength > 0 then
            send targetMessage to targetBuddy
        end if
    end tell
end run

It now takes one more parameter, the file name. The script converts the file name to a POSIX file and sends it as an attachment. I also added two simple checks so that I can send text and/or a file.

The next step was to automate the process. Just when I was ready to Google one more time, someone pointed me to Apple’s launchd, which is similar to Unix’s cron. launchd lets you daemonize pretty much any process. You compose a plist (a special form of XML) file and put it under /Library/LaunchDaemons/, and the daemon starts as one of the system startup items.

Following the official guide, I made the following plist file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.billyu.botherlucy</string>
    <key>ProgramArguments</key>
    <array>
        <string>osascript</string>
          <string>/Users/billyu/dev/simplifyLifeScripts/sendMessage.applescript</string>
        <string>9999999999</string>
        <string>Daily Lucy appreciation :p</string>
        <string>/Users/billyu/dev/simplifyLifeScripts/assets/violet.png</string>
    </array>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>0</integer>
    </dict>
</dict>
</plist>

The ProgramArguments key maps to an array of arguments used to launch the process the daemon wraps. In my case, I just run osascript to execute the AppleScript at its absolute path, with the phone number, text message, and the image’s absolute path as parameters. The phone number is obviously censored.

The other key, StartCalendarInterval, is a handy way to run the job periodically. Any key left out is treated as a “*” wildcard, so with only Hour set to 0 the process runs every day at 00:00. I later changed it to 22:00 after realizing my computer might be shut down at midnight. Can’t miss the bother window.

To avoid restarting my laptop, after copying the file into the launchd directory I ran sudo launchctl load {plist file path} so the daemon would start right away.

I did some testing with sending the message every minute and it worked perfectly. It’s worth noting that this is one of the few things that just worked the first try.

Excited for 10pm tonight! Although someone else might not be.

Image Uploading and Attaching for the Blog Project

Today I have been working on a feature that allows users to upload images as attachments and include them in Markdown format.

Here’s a demo of the complete feature:

The first decision I made was how I should accept the upload. After several minutes of thinking with my pea brain, I decided to use multer to take file uploads from clients. multer puts each uploaded file in a directory under a randomly generated name to avoid name collisions. It works well with Express in that it sets the file property on req, with useful fields such as the original name, the file size and the path to the file.

Without much further thinking (which later proved to be a mistake), I thought it would be natural to simply serve the directory that has all the uploaded files.

So I wrote these:

API for upload

const express = require('express');
const router = new express.Router();
const fs = require('fs');
const multer = require('multer');
const upload = multer({ dest: 'uploads/' });
const auth = require('../auth');

const { ALLOWED_EXTENSIONS, MAX_SIZE } = require('../config');

router.post('/', [auth.admin, upload.single('upload')], (req, res, next) => {
  // multer leaves req.file undefined when no file was sent
  if (!req.file) {
    return res.status(400).json({ message: 'No file uploaded' });
  }
  const filename = req.file.originalname;
  const path = req.file.path;

  const splitArr = filename.split('.');
  if (splitArr.length === 1 || !ALLOWED_EXTENSIONS.includes(splitArr.pop().toLowerCase())) {
    removeFile(path);
    return res.status(403).json({ message: 'Forbidden file extension' });
  }

  if (req.file.size > MAX_SIZE) {
    removeFile(path);
    return res.status(403).json({ message: `File exceeds maximum allowed size: ${MAX_SIZE / 1000000} MB` });
  }

  res.json({ path: req.file.path });
});

function removeFile(path) {
  fs.unlink(path, err => {
    if (err) console.log(err);
  });
}

module.exports = router;

serve directory:

app.use('/uploads', express.static('uploads'));

frontend’s upload method

onUpload() {
  if (!this.file) {
    this.props.displayMessage('You haven\'t selected any file yet!');
    return;
  }

  const data = new FormData();
  data.set('upload', this.file);
  instance.post('/uploads', data, {
    headers: { Authorization: 'Bearer ' + this.props.token },
  })
    .then(response => {
      const files = this.state.files.slice();
      files.push(response.data.path);
      this.setState({
        files,
      });
      console.log(files);
    })
    .catch(err => {
      this.props.displayMessage(err.response.data.message);
    });
}

This worked fine. However, when I deployed some other minor changes, such as a progress bar for uploads and a one-click copy-path button, and hit refresh in my browser – the images were gone! I soon realized that Docker had created new containers because of the file changes, and files in the original containers are lost unless I back them up or mount from the host.

That was when I decided to store all of the image files in MongoDB. This way, the images live in the same place as the post contents, which makes backing up easy. It would also be easy to implement because I already had code for other schemas.

With some copy pasta

Schema for images:

const mongoose = require('mongoose');

const ImageSchema = new mongoose.Schema({
  data: Buffer,
  contentType: String,
}, { timestamps: true });

mongoose.model('Image', ImageSchema);

API handler for uploading and retrieving images:

const express = require('express');
const router = new express.Router();
const fs = require('fs');
const multer = require('multer');
const upload = multer({ dest: 'uploads/' });
const auth = require('../auth');
const mongoose = require('mongoose');
const Image = mongoose.model('Image');

const { ALLOWED_EXTENSIONS, MAX_SIZE } = require('../config');

router.get('/:id', (req, res, next) => {
  const id = req.params.id;
  Image.findById(id).exec()
    .then(image => {
      if (!image) {
        return res.sendStatus(404);
      }
      res.contentType(image.contentType);
      res.send(image.data);
    })
    .catch(err => {
      if (err.name === 'CastError') {
        return res.sendStatus(404);
      }
      next(err);
    });
});

router.post('/', [auth.admin, upload.single('upload')], (req, res, next) => {
  // multer leaves req.file undefined when no file was sent
  if (!req.file) {
    return res.status(400).json({ message: 'No file uploaded' });
  }
  const filename = req.file.originalname;
  const path = req.file.path;

  const splitArr = filename.split('.');
  const extension = splitArr.pop().toLowerCase();
  if (!ALLOWED_EXTENSIONS.includes(extension)) {
    removeFile(path);
    return res.status(403).json({ message: 'Forbidden file extension' });
  }

  if (req.file.size > MAX_SIZE) {
    removeFile(path);
    return res.status(403).json({ message: `File exceeds maximum allowed size: ${MAX_SIZE / 1000000} MB` });
  }

  const image = new Image({
    data: fs.readFileSync(path),
    contentType: `image/${extension}`,
  });
  image.save()
    .then(saved => res.json({ path: `uploads/${saved._id}` }))
    .then(() => removeFile(path))
    .catch(next);
});

function removeFile(path) {
  fs.unlink(path, err => {
    if (err) console.log(err);
  });
}

module.exports = router;

I also had to catch ‘CastError’ in the GET handler because the stupid mongoose throws when the param cannot be cast into an ObjectId.

Basically, when the user uploads, I store the file in MongoDB, delete the file from the filesystem, and return the ID of the file in the database. The ID can then be used with the GET API.

I’m also proud to say that this API endpoint is unit tested and (almost) fully covered. No I wish not to discuss the overall 0.2% coverage drop for the repo.

As I said above, I also added progress bar, feedback for copying path, error prompt for invalid files and some other UI features. The GitHub issue is now closed and I now just have to wait for the requester to come back online for my demo :).

Progress on blog rewrite

All right, this is where I say I can actually get something done.

Achievements for the blog project include:

  • APIs for log in/out, posts CRUD, comments CRUD, like/dislike
  • 93% coverage on APIs mentioned above
  • Using React-Redux to maximize data reuse and minimize the number of API calls
  • Using universal-cookie to store the logged in state (okay this might not deserve a stand alone bullet point)
  • Using Docker (Dockerfile and docker-compose) to automate the deployment process.

Today, lucky for you, I’ve decided to talk about how docker-compose in this project works.

Docker is the company driving the container movement and the only container platform provider to address every application across the hybrid cloud.

^ From Docker’s self introduction. What that means for me is that with proper usage, I wouldn’t have to set up production machines with all the dependencies that my project needs whenever I would like to deploy. Ideally all I would have to do is to write Dockerfiles and docker-compose.yml, install Docker and let Docker handle the rest.

In this blog project, with the backend and the frontend separated, the dependencies (required on the environment, not the npm ones) are:

  • backend:
    • MongoDB
    • Node/npm
  • frontend:
    • Node/npm (for building)
    • Nginx (for serving)

With these in mind, I was able to write a Dockerfile and a docker-compose.yml for the backend, following the documentation and random StackOverflow answers:

Dockerfile:

FROM node:carbon

WORKDIR /app

COPY package*.json ./

RUN npm install

COPY . .

RUN npm run build-server

EXPOSE 1717

RUN ["chmod", "+x", "/app/wait-for-it.sh"]

CMD ["node", "build/server.js"]

docker-compose.yml

version: '3'
services:
  blog-api:
    build:
      context: ./
      dockerfile: Dockerfile
    restart: always
    depends_on:
      - mongodb
    environment:
      MONGO_URL: mongodb://mongodb:27017/blog
    ports:
      - "1717:1717"
    command: bash /app/wait-for-it.sh mongodb:27017 -- node build/server.js
  mongodb:
    image: mongo:latest
    restart: always

The Dockerfile specifies the config for the blog-api container, while the docker-compose.yml tells Docker how my blog-api container relates to the mongodb service container.

Several things to notice:

  • Each Docker container is like a little VM by itself, so WORKDIR is a directory inside the container, and when I do a ‘COPY . .’, it naturally copies from the current directory on the host to the current directory in the container.
  • Notice how I copied the package.json file first and npm installed before copying anything else. The reason is that Docker caches each image layer and reuses it as long as nothing that layer depends on has changed. Therefore if I only change some api route file, I don’t have to wait for the long npm install process again.
  • wait-for-it is a tool that waits for a process to listen on a port before doing something. It has automatic retries, which is very useful in this case. I could, however, just let blog-api restart always as is, but this tool has much less overhead.

Later I added another Dockerfile for the frontend, which looks like this:

FROM nginx

RUN apt-get update

RUN apt-get install -y curl wget gnupg

RUN curl -sL https://deb.nodesource.com/setup_8.x | bash

RUN apt-get install -y nodejs

WORKDIR /app

COPY package*.json ./

RUN npm install

COPY . .

RUN npm run build

RUN cp -a /app/dist/* /usr/share/nginx/html

RUN cp /app/nginx.conf /etc/nginx/

This image extends nginx, so the default CMD starts up the nginx server. I need Node.js for building the static files, so I added the couple of lines there. The last two lines copy the static files to nginx’s serving directory and my config file to nginx’s config directory.

With the frontend added, I added one more service to docker-compose.yml:

web:
    build:
      context: ./
      dockerfile: Dockerfile-frontend
    restart: always
    ports:
      - "80:80"

This simply adds my container for the web frontend to docker-compose so that I don’t have to manually start up every container. Instead, I only have to run docker-compose build and docker-compose up -d.

I also added automatic seeding for the MongoDB database but I’m too lazy to paste the steps here again so screw you.

The following point is unrelated to Docker, but I spent some time on it and felt it would be interesting to include here: my nginx.conf file. Since I’m building the frontend with React’s single-page-serves-it-all pattern, I have to make sure that the nginx server returns the index.html file no matter what the sub-URL path is. The only exception is when the client requests some JS or resource file. With this in mind:

server {
    listen 80;
    root /usr/share/nginx/html;
    location / {
        try_files $uri /index.html;
    }
}

It tries to find the file specified in the URI first, before falling back to index.html. 404s are handled on the frontend by my React application.

For the next step, I’ll be working on attachments to posts as a feature request from this person.

Automated iOS Testing with Appium and wd

Apparently someone checked out this blog during working hours and decided to tell me that I need to do an update.

Last month I was working on this mobile version of GitHub built with React Native. The framework itself uses the same principle as React, which is component-oriented (is that a real word). However, I had the most fun working on automated testing on the simulator and a real device with Appium, the JSONWireProtocol implemented by wd, and Mocha.

Appium is a local server that accepts requests for UI testing. It supports languages including Java, Python and JavaScript, all speaking the same essential protocol. wd is one of the JavaScript libraries that implement the protocol, AND the one with the most stars on GitHub, which is why I chose it. Appium uses WebDriverAgent, developed by Facebook, to ‘control’ real devices and simulators. It also uses the Xcode dev tools to install apps, inspect elements, etc.

Appium has this concept called ‘desired capabilities’, which configures the testing environment. The capabilities I used for this UI test include:

  • udid, the unique identifier of the real device or simulator
  • platformName:’iOS’
  • platformVersion:’11.2′
  • app, the installation package of the app to be tested, .app for the simulator, .ipa for a real device
  • xcodeOrgId, the Xcode developer identifier
  • xcodeSigningId:’iPhone Developer’
  • deviceName, duh

Here is the documentation for the desired capabilities. One important thing to note is that .app is the package extension for simulators and .ipa is the extension for real devices. Methods to obtain them are documented in the README of my repository (link below).

With these in mind, I could start writing the tests. First and foremost, I needed to connect to Appium server with wd API like this:

const server = {
  host: 'localhost',
  port: 4723,
};
const driver = wd.promiseChainRemote(server);
const desired = {
  // desired capabilities above
};
driver.init(desired);

With the mapping provided on wd GitHub repo, I was then able to gradually implement my UI tests.

it('should display every tab', () => {
    return driver.waitForElementByName('Xuanyu Zhou', 6000)
      .then(el => el.text())
      .then(result => {
        result.should.be.equal('Xuanyu Zhou');
        return driver
          .elementByXPath('//XCUIElementTypeButton[contains(@name, "Public Repos")]')
          .click()
          .sleep(200);
      });
});

The above snippet is a Mocha test that first waits for an element named ‘Xuanyu Zhou’ to mount, then inspects the text on that element. Afterwards, it uses XPath to select a button whose name contains ‘Public Repos’, clicks it, and sleeps for 200 milliseconds.

Apparently XPath’s documentation is hard to find, so it took me several StackOverflow trips to figure it out.

One neat thing about Appium-Desktop is that it has this inspector feature that lets you inspect the elements on the phone screen by clicking on them, just like Chrome’s developer tools. This is tremendously useful especially for the annoying XPaths.

Another interesting thing to note: during testing I hit a weird bug where, although I had the username and password inputs as separate components, I could not click on them individually in the inspector, and I couldn’t select them by name either. It turned out I needed to make the input fields ‘accessible’ in React Native, documentation here. I guess Apple uses the accessibility fields to give elements their names in testing. An Apple employee would be helpful here.

Repository

Testing Demo Episode 1:

Testing Demo Episode 2:

High-efficiency vector using rvalue references and std::move

#include <iostream>
#include <cstdlib>
#include <cstring>
#include <new>      // placement new
#include <utility>  // std::move
using namespace std;
class Element {
private:
    int number;
public:
    Element() : number(0) {
        cout << "ctor" << endl;
    }
    Element(int num) : number(num) {
        cout << "ctor" << endl;
    }
    Element(const Element& e) : number(e.number) {
        cout << "copy ctor" << endl;
    }
    Element(Element&& e) : number(e.number) {
        cout << "right value ctor" << endl;
    }
    ~Element() {
        cout << "dtor" << endl;
    }
    void operator=(const Element& item) {
        number = item.number;
    }
    bool operator==(const Element& item) {
        return (number == item.number);
    }
    void operator()() {
        cout << number;
    }
    int GetNumber() {
        return number;
    }
};

template<typename T>
class Vector {
private:
    T* items;
    int count;
public:
    Vector() : count{ 0 }, items{ nullptr } {

    }
    Vector(const Vector& vector) : count{vector.count} {
        items = static_cast<T*>(malloc(sizeof(T) * count));
        // memcpy only works here because Element owns no resources;
        // a general Vector would copy-construct each element instead.
        memcpy(items, vector.items, sizeof(T) * count);
    }
    Vector(Vector&& vector) :count{ vector.count }, items{ vector.items } {
        vector.items = nullptr;
        vector.count = 0;
    }
    ~Vector() {
        // Element holds no resources, so just release the buffer here
        // (a general Vector would destroy each element first).
        free(items);
    }
    T& operator[](int index){
        if (index<0||index>=count) {
            cout<<"invalid index"<<endl;
            return items[0];
        }
        return items[index];
    }
    int returnCount(){
        return count;
    }
    void Clear() {
        for (int i = 0; i < count; i++)
        {
            items[i].~T();
        }
        free(items);    // the buffer was leaked before
        count = 0;
        items = nullptr;
    }

    void Add(const T& item) {
        T* newItems = static_cast<T*>(malloc(sizeof(T) * (count + 1)));
        int i;
        for (i = 0; i < count; i++)
        {
            new(&newItems[i])T(move(items[i]));
        }
        // item is a const reference, so move(item) still selects the copy ctor
        new(&newItems[count])T(move(item));
        for (int i = 0; i < count; i++)
        {
            items[i].~T();
        }
        free(items);    // release the old buffer
        count++;
        items = newItems;
    }
    bool Insert(const T& item,int index) {
        if (index < 0 || index >= count)
        {
            return false;
        }
        T* newItems = static_cast<T*>(malloc(sizeof(T) * (count + 1)));
        int i;
        for (i = 0; i < index; i++)
        {
            new(&newItems[i])T(move(items[i]));
        }
        new(&newItems[index])T(move(item));
        for (i = index; i < count; i++)
        {
            new(&newItems[i+1])T(move(items[i]));
        }
        for (i = 0; i < count; i++)
        {
            items[i].~T();
        }
        free(items);    // release the old buffer
        count++;
        items = newItems;
        return true;
    }
    bool Remove(int index) {
        if (index < 0 || index >= count)
        {
            return false;
        }
        T* newItems = static_cast<T*>(malloc(sizeof(T) * (count - 1)));
        int i;
        for (i = 0; i < index; i++)
        {
            new(&newItems[i])T(move(items[i]));
        }
        for (i = index + 1; i < count; i++)
        {
            new(&newItems[i-1])T(move(items[i]));
        }
        for (i = 0; i < count; i++)
        {
            items[i].~T();
        }
        free(items);    // release the old buffer
        count--;
        items = newItems;
        return true;
    }
    int Contains(const T& item) {
        for (int i = 0; i < count; i++)
        {
            if (items[i] == item)
            {
                return i;
            }
        }
        return -1;
    }
};

template<typename T>
void PrintVector(Vector<T>& v) {
    int count = v.returnCount();
    for (int i = 0; i < count; i++)
    {
        v[i]();
        cout << " ";
    }
    cout << endl;
}

int main() {
    Vector<Element> v;
    for (int i = 0; i < 4; i++) {
        Element e(i);
        v.Add(e);
    }
    PrintVector(v);
    Element e2(4);
    if (!v.Insert(e2, 10))
    {
        v.Insert(e2, 2);
    }
    PrintVector(v);
    if (!v.Remove(10))
    {
        v.Remove(2);
    }
    PrintVector(v);
    Element e3(1), e4(10);
    cout << v.Contains(e3) << endl;
    cout << v.Contains(e4) << endl;
    Vector<Element> v2(v);
    Vector<Element> v3(move(v2));
    PrintVector(v3);
    v2.Add(e3);
    PrintVector(v2);
    return 0;
}

output:

ctor
copy ctor
dtor
ctor
right value ctor
copy ctor
dtor
dtor
ctor
right value ctor
right value ctor
copy ctor
dtor
dtor
dtor
ctor
right value ctor
right value ctor
right value ctor
copy ctor
dtor
dtor
dtor
dtor
0 1 2 3 
ctor
right value ctor
right value ctor
copy ctor
right value ctor
right value ctor
dtor
dtor
dtor
dtor
0 1 4 2 3 
right value ctor
right value ctor
right value ctor
right value ctor
dtor
dtor
dtor
dtor
dtor
0 1 2 3 
ctor
ctor
1
-1
0 1 2 3 
copy ctor
1 
dtor
dtor
dtor

My Projects

I’ve decided to make a summary of my past projects here. I have spent most of my free time on iOS development, and have also explored some web development using PHP and JavaScript. Last summer I used JavaScript to build educational software for a CS professor here at Duke.

I’ve ordered them according to my personal preference. :)

  1. DukeCSA

DukeCSA (on GitHub) is an iOS app started by Jay Wang (currently a senior at Duke) to serve the needs of the Duke Chinese Student Association. I joined the team around Christmas 2015. It combines many useful features:

  • Events – users can view upcoming and past events hosted by DukeCSA, and can sign up for or comment on them in the app.
  • Q&A – students can ask their peers about life at Duke. This section is like Quora for Duke.
  • Class Database – users can browse a large collection (1000+) of comments on courses offered at Duke to help them choose classes.
  • Crush – users can express their secret admiration for others. If there is a match, both users are notified.
  • Web event poster – a web interface for the CSA committee to post new events without writing any code. Each event is saved to our database and all users are notified.

short demos:
notification indication

web interface

Read more about iOS projects

 

2. JFLAP web

JFLAP (Java Formal Language and Automata Package) is educational software covering finite state machines, Moore and Mealy machines, Turing machines, and more. I worked on building the online version of JFLAP and integrating JFLAP into the OpenDSA (Data Structures and Algorithms) project.

The job included designing and implementing the user interface, optimizing and implementing the algorithms, and porting the Java version to JavaScript. Along the way I learned about formal languages and automata as well as software development.

short demo:

more about JFLAP, more about OpenDSA, development blog, web demo

 

3. 3D iOS games

I also learned about 3D iOS game development. Below are demo videos of two games I built:

Marble Maze – gravity-controlled

Breakout

 

4. Tank Battle

This started as a homework project in my software development class, but I treat it as more than that. The game features terrain elements such as stone, brick, grass, and water. The player must protect the base and eliminate enemies. The game also uses persistent storage to present a leaderboard.

demo:


The design comes from the classic video game Battle City.

 

5. Blog Post System

A blog post system written mainly in PHP, responsive on both desktop and mobile devices. Users can view all posts without logging in, and can post articles or comments when logged in. Data is stored in a MySQL database. APIs are also provided for possible iOS app development in the future.

demo: http://billyu.com (It’ll probably be more fun if you can read Chinese)

 

6. Wheeshare

My first iOS app! Wheeshare promotes sharing among Duke students. I completed this project with a grant from Duke CoLab, my current employer.
On the platform, students can post belongings to lend, browse the available items and request to borrow one with a single click, and easily manage their posts.

 

LeetCode 20. Valid Parentheses

Problem:

Given a string containing just the characters '(', ')', '{', '}', '[' and ']', determine if the input string is valid.

The brackets must close in the correct order: "()" and "()[]{}" are valid, but "(]" and "([)]" are not.

Solution:

import java.util.Stack;

public class Solution {
    public boolean isValid(String s) {
        Stack<Character> stack = new Stack<>();

        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            // A closing bracket must match the most recently pushed opener.
            if (c == ')' && (stack.isEmpty() || stack.pop() != '(')) return false;
            if (c == '}' && (stack.isEmpty() || stack.pop() != '{')) return false;
            if (c == ']' && (stack.isEmpty() || stack.pop() != '[')) return false;
            if (c == '(' || c == '{' || c == '[') stack.push(c);
        }
        // Valid only if every opener found its match.
        return stack.isEmpty();
    }
}
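As a quick sanity check, here is a small standalone sketch (the `Main` wrapper is mine, not part of the LeetCode submission) that runs the same stack logic against the four examples from the problem statement:

```java
import java.util.Stack;

public class Main {
    // Same logic as the Solution above, inlined so the file runs on its own.
    static boolean isValid(String s) {
        Stack<Character> stack = new Stack<>();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c == ')' && (stack.isEmpty() || stack.pop() != '(')) return false;
            if (c == '}' && (stack.isEmpty() || stack.pop() != '{')) return false;
            if (c == ']' && (stack.isEmpty() || stack.pop() != '[')) return false;
            if (c == '(' || c == '{' || c == '[') stack.push(c);
        }
        return stack.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(isValid("()"));      // true
        System.out.println(isValid("()[]{}"));  // true
        System.out.println(isValid("(]"));      // false
        System.out.println(isValid("([)]"));    // false
    }
}
```

Note how "([)]" fails: when ')' arrives, the top of the stack is '[', so the pop-and-compare immediately returns false even though the counts of each bracket type balance.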