Science Climate Change Unit Reflection

System Media:

Science Topic Media:

Website:

app.famous.co/climatechange (only works on mobile currently)

Reflection:

Final Thoughts:

Overall, I feel quite positive about this unit. Although I already knew a decent amount about climate change before the unit, this project introduced some interesting challenges along the way. However, I wish the instructions and expectations had been clearer in general.

Interesting Discoveries:

One thing that was quite interesting was the difference in research predictions and conclusions, not only between different papers but also over time. Especially regarding predictions, a study on a given topic from 2007 could have completely different outcomes than one conducted in 2017, for example. The trend seems to be that, over time, the predictions are getting worse and worse for the planet, which made me wonder why that is the case.

Most challenging moments:

Empathizing with people who may not have the same amount of knowledge or concern about the topic. In making the website, I wanted to make sure people wouldn’t just close out of the website before finishing it. The first prototype was a lot longer and more cluttered, making it possibly interesting for people already interested in climate change, but a hassle to go through for the website’s main audience: people who don’t care about climate change. So my final version was a lot more concise, with only the impactful parts left to read.

Most powerful learning moments:

One part that sticks out to me was when I was ideating my system. Ms. Buollock pointed out that the rewards my system gives to users shouldn’t have a bigger negative impact on the environment than what the system is trying to reduce. It was an ‘a-ha’ moment that led me to rethink many parts of my system. I realized I had to think the consequences of my actions all the way through, and I saw how difficult it is to incentivize people to take action against climate change.

The most important thing I learned personally:

A lot of people care about climate change, but actions against it aren’t as easy as ‘do this and it’s good’. In my research, I saw a lot of scientific studies, discussion, and outrage about climate change. With this seemingly overwhelming concern, I wondered why big actions aren’t being taken. After looking around a bit more, I understood that it isn’t that simple. It would require massive amounts of investment, the complete dismantling of certain industries, and international cooperation; and in our current state, it seems almost hopeless that things will change before the point of no return. All we can do is continue supporting action against climate change, possibly with something like this unit’s campaigns.

The thing that most got in my way:

THE WEBSITE PROGRAM. Everything else was up to me to create and perfect. The program I used to create the website was relatively easy to use, with a good interface, but it just wouldn’t work sometimes. To be fair, I did use a program that was in open beta, but the constant crashing, freezing, and overall bugginess were still quite annoying. I’m glad I was able to finish it to a satisfying standard.

My biggest strengths and areas for improvement:

I think that my biggest strength was my willingness to invest time into just brainstorming ideas. Although I am satisfied with most steps of this unit, I feel that I had the most success in the ideating part. The concepts of my website and system lined up with my goals for the unit, and thinking through the details of all three products was very interesting. I think the areas I could improve on the most are visualizing data and keeping things concise. Most of the data visuals I used were kind of boring, and I didn’t really visualize the impacts of climate change as much as I could have. Also, all of my first prototypes were wordy and hard to read. Although I fixed this later, I want to work on making my explanations as short and impactful as possible.

What I would do differently:

I would rearrange my extreme weather infographic so that it puts more emphasis on the impact of climate change, create more visuals, possibly use a different website builder that could incorporate the user’s age to give the site more impact, and remake my science poster with fewer words and more visuals. Creating an extra piece to grab attention and spark interest in my website would also have helped people approach my booth.

Proudest moments:

When some middle school students came to my exhibition, went through my website, and listened to my explanation, showing interest and asking questions about climate change. It made the whole experience feel worthwhile, and I wish more students had come to my booth.

May 21st Update

For the past 5 weeks, I have been working on my teaching resource/introduction to neural networks: a product I am hoping to create for anyone who may be interested in neural networks in the future.

For the first week or so, I looked at other people’s resources and tutorials to get some inspiration and brainstorm what I wanted to make. I also looked at some examples of neural networks, because I knew that I wanted to spark interest using examples.

I revisited the 3Blue1Brown video that I originally watched to learn about neural networks, because I thought it was the best place to start. It takes an interesting approach to explaining how they work and uses the brain analogy often. It also uses a lot of technical terminology that isn’t very beginner-friendly, but it is a 4-part series that eventually clarifies those terms. It also has amazing visuals, part of which I used in my own product. However, its way of explaining the network as finding patterns and combining them through each layer doesn’t seem like the best way to explain neural networks; although mimicking the human brain was the original goal, it’s not the most accurate description. I also felt that explaining sigmoid neurons/layers is unnecessary for teaching neural networks.

https://medium.com/machine-learning-for-humans/why-machine-learning-matters-6164faf1df12

This series of blog posts had interesting examples and a lot of great explanation, covering a broad range of topics. However, the simple explanations get kind of lost in the long blocks of code and the in-depth terminology and math that appear as you go through the chapters.

https://becominghuman.ai/deep-learning-an-eli5-intro-to-neural-networks-baf7b02c1ae5

This blog post is unique in that it uses various visuals and simple logic to try to explain how neural networks work. It uses an example with two factors and university acceptance to imitate the process of a neural network, but the explanation seems easy to read at best and flat-out wrong at worst. It illustrates a linear function to outline a process of elimination that neural networks supposedly go through, which is not actually how they work. It does drastically simplify neural networks, though, and also briefly mentions other types of them.

Nvidia AI turns sketches into photorealistic landscapes in seconds

https://skymind.ai/wiki/generative-adversarial-network-gan

These were some cool use cases that I found while looking around the internet. They both use something called Generative Adversarial Networks, a relatively new invention that is showing great promise. So, I thought it would be interesting to give newcomers an insight into current developments in the field.

The rest of the time was spent creating the product. At first, I didn’t have a specific format in mind. Ideas such as videos, interactive data and/or networks, blog-style tutorials, etc. seemed plausible, and I started looking into them further.

Videos seemed like the obvious choice if I wanted audience engagement; however, I lacked the tools and skills to create a good-quality video, with all the recording and editing involved. Also, if I made a video, it would contain a lot of static images, which is not fun.

I looked into interactive data and/or neural networks that people can play around with, since that gives people the freedom to explore. I looked at a few articles and videos trying to find a way I could make this product in time, but I could not find one that did what I wanted without my having to be an expert at it. Also, using my modular network was out of the picture, since it would have taken ages to train and show improvement. 🙁

Then, I decided that I should start writing out some of the information I wanted to put into the product, since I was spending a lot of time figuring out how to do it without actually making anything. So, while still looking for possible formats, I made a Word document where I could write out the information I wanted to convey.

In the process, I also created a GIF that I needed to explain a certain process, taken from the video I mentioned at the start; it would have been time-consuming to make one myself, and Grant Sanderson from 3Blue1Brown had already done an amazing job.

The document that I had created was basically in the form of blog posts, explaining each part with text, one by one. However, after talking with Mr. Beatty and getting the idea of making a PowerPoint file with shapes and animations, I decided that would work much better than blog posts.

I started to transfer the content from my document onto a PowerPoint and tried my best to cut as much text as possible. I also created shapes and animated them to fit the narrative of my explanation. One small fix: a table I had made was causing too much lag during animations, so I just took a screenshot and used the image instead.

A trick I used to reduce the perceived amount of text on the PowerPoint was to split it into segments that appear/disappear with each click. This made it easier to put focus on specific things at specific times. As a result, however, some of the text overlaps when not presenting, which made editing difficult.

Right now, I need to work on one of the ‘content’ slides, since I haven’t figured out how to do one of the parts yet, and I need to work on the beginning hook/introduction and the ending. I have also started a ‘technical/math’ document that people can keep for reference if they want to know more about the technical side of neural networks. I also need to get feedback from Mr. Beatty and some of my friends to see what I can change. Hopefully, a desirable and presentable product can be finished before the deadline.

April 16th Update – March 18 & 25, April 1 & 8 Week Progress

A lot of progress has been made throughout the 4 weeks this update covers, but I will try to keep it a concise and example-centric blog post.

The 4 weeks were spent building my modular neural network, as I had planned. It did take a bit longer than expected, partly because of other commitments and partly because of all the debugging that had to happen to make everything work. However, the standard structure I had created beforehand made the process a lot easier to handle, and my pre-existing calculation code was a reference I could come back to when I was stuck.

For the first week (March 18), I mostly worked on creating the setup phase of the neural network. This is where the “structure” of the network is determined, and where a new set of parameters is created for each layer in the network. It wasn’t too hard a job, since it mostly followed the planning I had done before starting, but a few bugs did come through.

For example, one bug happened because I made the setup calculate the dimensions of the network at each layer, but I did not consider that the fully connected layer cannot simply take in a 3D input. So I realized I needed to restrict the structure of the network: there has to be a flattening layer between the convolution and fully connected layers.
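
The restriction itself boils down to a small check, roughly like this (the “c” layer code is from my structure lists; “d” for fully connected and “f” for flatten are hypothetical stand-ins, not my real codes):

```python
def check_structure(structure):
    """Reject any structure where a conv layer feeds a fully connected layer directly."""
    for previous, current in zip(structure, structure[1:]):
        if previous[0] == "c" and current[0] == "d":
            raise ValueError("a flattening layer ('f') is needed between "
                             "convolution and fully connected layers")
```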

Other than that, most of the writing and debugging was done in 3~4-ish hours. I also wrote the code to save the network and to ask the user for the specifics of each layer. However, China Cup made it hard to make much more significant progress, other than a bit of work on the forward pass.

Time Lapse

In the second week, the forward pass of the network was written. The easier layers came first, such as the fully connected, squish, and output layers; then the convolution and pooling layers.

One practice I tried to work into my code this project was naming my variables before writing the actual code. Unlabeled indexing was a real problem in my fully connected network, where random references to elements of an array made the code harder to read and harder to debug. So this time I laid out the variables and output arrays first, before starting to calculate the values.

I also had a problem slicing arrays in NumPy. Most resources I’d seen used expressions like arr[3:5][3:5] to take a slice of a 2D array, but it took a good half hour of reading documentation and searching tutorials to find out it is actually written arr[3:5, 3:5], which looks cleaner on paper but makes no sense at first and is much more confusing with variables.
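
For anyone who hits the same wall, here is the difference in a tiny example:

```python
import numpy as np

arr = np.arange(36).reshape(6, 6)

# What the nested-bracket habit suggests: slice, then slice again.
# But arr[3:5] already keeps only rows 3-4, so the second [3:5]
# asks for rows 3-4 of a 2-row array and comes back empty.
wrong = arr[3:5][3:5]    # shape (0, 6) -- not the 2x2 block I wanted

# The NumPy way: rows and columns in one set of brackets.
right = arr[3:5, 3:5]    # shape (2, 2) -- the actual 2x2 block
```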

Anyway, after that was over with, I went through each function to check that it was correct and did some trial runs with pre-formatted parameters. All of this took significantly longer, around 6~7 hours of total work, but was an interesting challenge. The concept of using functions as building blocks of other functions, themselves building blocks of a program, also pushed me to make my code more efficient. I tried small things such as pre-calculating values that are used multiple times and using functions to save on memory.

In the third and fourth weeks, the focus was on the backpropagation and learning parts of the network. This phase fell over the break, so I was able to invest a lot of time into it, which was really needed.

While coding the backpropagation, I realized a few data points were missing from my forward pass, fixed a couple of bugs, and then tried to run the network with learning enabled. I found that the network wasn’t getting much better than random chance; in fact, it was returning the same value each time, converging onto one output.

As seen in the time lapse, I had problems with array shapes, slicing, and array multiplication and addition. So I looked back at the code, tried things out, and used my testing file to see how features behave. And those were just the errors; the learning part still had problems. After hours of staring at the code (which I didn’t time-lapse because it is literally just scrolling through code), I found mistakes like not dividing the changes by the batch size, inconsistencies between the forward and backward passes, etc.

Now I am pretty sure the network is working correctly (hopefully) and I’m ready to move on. I am still considering what to do first: either make a small app where someone could write a digit on a screen and have it detected, or start immediately on the neural network “tutorial”. Since I have a limited amount of time, I think I will just start the tutorial, so that I can schedule myself to finish at least 80% of it by the end.

The main challenge for me in this explanation program is deciding who I want my audience to be and what I want them to have learned by the end. I definitely want the audience to be wide, and I want it to be accessible to most people, but I don’t want to limit myself to restating everything everyone else has already said. I also want the audience to gain insight and develop more questions by the end of the process. So I’m trying my best to come up with various ideas and goals for this project. I have talked with Mr. Beatty, and he gave me some insight into possible approaches for explaining some of the topics, where I could focus, and some packages I could use. I will try my best to continue this thought process and document it for my blogs and myself.

March 18 Update – March 4 & 11 Week Progress

March 4 ~ 10

Before this week, I looked into possible ways to make a program that solves sudoku. I found many sources (only two of which I will list, for simplicity) pointing out that neural networks, in particular convolutional neural networks, can be used to solve sudoku.

StackOverflow Question

GitHub Post

In particular, the GitHub post claimed that the network had a decent amount of accuracy. This got me wondering how a neural network would actually learn to solve sudoku and whether this accuracy was possible. So, I decided to build a convolutional neural network.

I started by making a few functions not directly related to the network but useful to have. The one below takes in a 1-dimensional array plus the desired length and height, and returns a 2-dimensional array.
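
In rough form, it looked something like this (not the exact code):

```python
def to_2d(flat, width, height):
    """Split a 1D list of length width*height into the rows of a 2D list."""
    return [flat[row * width:(row + 1) * width] for row in range(height)]
```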

Below are two functions directly related to the neural network: the convolution layer, which takes input lists and kernels, and the max-average pooling. The activation function in the convolution layer and the method (kernel size and stride length) of the pooling layer were taken from this paper, which has empirical data on efficient methods. It may not be perfect, but I decided it would be better than an arbitrary choice by me.
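
In rough form, the convolution and the max half of the pooling look something like this (a simplified sketch, not my exact functions):

```python
import numpy as np

def convolve2d(image, kernel, stride=1):
    """Slide the kernel across the image, summing elementwise products."""
    k = kernel.shape[0]
    out_h = (image.shape[0] - k) // stride + 1
    out_w = (image.shape[1] - k) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i * stride:i * stride + k, j * stride:j * stride + k]
            out[i, j] = np.sum(patch * kernel)
    return out

def max_pool(image, size=2, stride=2):
    """Keep only the largest value in each pooling window."""
    out_h = (image.shape[0] - size) // stride + 1
    out_w = (image.shape[1] - size) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = image[i * stride:i * stride + size, j * stride:j * stride + size]
            out[i, j] = np.max(window)
    return out
```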

I also did some (relatively) quick calculations in my notebook to work out how many iterations each kernel has to do, and how much padding is needed at the ends of the lists to make it work.
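
The notebook math all comes down to the standard output-size formula:

```python
def output_size(n, kernel, stride, padding):
    """How many positions a kernel visits along an n-long input:
    (n + 2*padding - kernel) // stride + 1."""
    return (n + 2 * padding - kernel) // stride + 1

output_size(9, 3, 1, 0)  # -> 7 iterations along a 9-long list (one sudoku row)
```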

I eventually ran into an error (or at least one that I was not able to solve right away).

This error was especially confusing, since the error message does not give an intuitive answer to what the problem is. I looked around in various functions and created a new function (pictured below) that checks the dimensions of each function’s output, to find where the problem was.

I finally realized, after spending a few hours, that the problem was much simpler than I expected.

The [outt] on the 7th line was the problem. The extra [] around “outt” added an extra dimension to the input list, which messed the functions up. A quick fix confirmed that this was the source of the error. So, I added more layers to the network and tried a few different lists.

Eventually, I was able to get a result like this!

Although some problems arose while making the network (that error did take a long time to notice), I’m quite happy with the progress I have made towards building a more accurate network. I still need to learn how to do backpropagation in convolutional networks and get more understanding of them in general.

However, while thinking about what I want to use this for in the future, I came across an issue. If I am going to use this network on different types of datasets, I will have to re-code many parts of it, especially since it also has to connect to a fully connected network. Earlier in the semester, I saw a few programs, in particular the one provided by TensorFlow, that let users choose their own architecture for their neural networks. They used a modular design, adding one layer at a time. So I wanted to try to make a modular design myself.

The benefit would be a program I can reuse every time I need a new architecture for a new dataset, instead of having to spend time changing the code. One limitation is that I would need to be more careful when making further changes to the network, since a change could break any older networks.

March 11 ~ 17

Before I got started on the modular design, I needed to learn more about convolutional neural networks. I could easily find material on backpropagation, such as this pseudo-code demonstration, which was the most helpful for understanding how it works. However, another problem arose when looking deeper into convolutional networks: 3D kernels.

I had seen materials gloss over 3D kernels before but never thought they were necessary, since most explanations are done with 2D kernels. However, some sources pointed towards 3D kernels being the main type used in convolutions. One source of confusion was this picture:

I won’t go into too much detail about the problem, but the graphic suggests that it is using 3D kernels, which would mean there are hundreds or thousands of 3D kernels in each layer. Moreover, the notation used in the graphic is confusing and not at all consistent with my understanding of convolutions. Mr. Beatty and I tried our best to crack the secret of this graphic but unfortunately were not able to solve it in the end. I decided that 2D kernels would be sufficient for now and started designing the modular network.

I had, for some reason, planned my network on the table. I have now learned that writing important things on boards and tables is not the best way to organize and record them.

Anyway, I chose to plan the network first, since modular designs always have many moving parts and every part needs to fit with every other part. The goal is that one list will contain everything required to know the architecture of the network, another list will contain all the weights, biases, and kernels for that network, and both lists can be stored together in a single file. For example, [“c”,[x,y],[3,2,1,4]] would mean: a convolutional layer, with input dimensions of x*y, 3×3 kernels of stride length 2 and zero padding of 1, and 4 kernels in this layer. So, I want to create a program that reads a list of these structure lists and can build the exact network. Then, when running the network, it can refer to the parameter list through the structure list. A third list is used to store data for backpropagation, but it is temporary while running and does not have to be saved.
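
As a sketch, reading the structure list to build the parameter list could look like this (only the “c” entry follows my real format; the rest is a stand-in):

```python
import numpy as np

# One convolutional layer in the format described above:
# 9x9 input, 3x3 kernels, stride 2, zero padding 1, 4 kernels.
structure = [["c", [9, 9], [3, 2, 1, 4]]]

def build_parameters(structure):
    """Create one parameter entry per layer in the structure list."""
    parameters = []
    for layer in structure:
        if layer[0] == "c":
            kernel, stride, padding, n_kernels = layer[2]
            parameters.append(np.random.randn(n_kernels, kernel, kernel) * 0.1)
        else:
            parameters.append(None)  # e.g. a flattening layer stores nothing
    return parameters
```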

Coming up with an organized “standard” structure for the modular design was more difficult than I expected, since I had to make sure I included all the information the network needs to evaluate the structure, and with so many moving parts it took a while to wrap my head around it. I will begin building the project next week, and I am sure the coding will come with a lot of problems of its own, but I am excited to make something that can truly solve many problems and that other people will be able to use. My goal is to finish building this before March 27th and use it on many different datasets with many different structures. I also want to make sure a friendly UI is present, so that anyone else willing to play around with neural networks can use it intuitively.

March 2nd Update

Feb 18~24

This week I started to look into what I could extend the neural network into. Since my current neural network is based on recognizing handwritten digits, I had the idea of looking into image recognition and object detection.

Before I continue with image detection, on another note: I discovered some further information about neural networks in an amazing FAQ that talks about everything neural-network-related. From types of neural networks, comparisons of backpropagation methods, and data scaling, to various other resources, datasets, and applications of networks, it was an interesting read and taught me a lot (although it was a bit off-topic): ftp://ftp.sas.com/pub/neural/FAQ.html

Now, out of all the sources I read, this article was the most useful for understanding how image detection works. It goes through how object detection & classification models developed over time and explains how they work.

The first part, image classification, works similarly to a plain neural network, just in a more complicated way. Multiple convolutional networks basically split a picture up into many parts and run each one through pre-trained parameters, which return another set of arrays; those can go through another network, and so on, until a final output is returned.

Localization isn’t much different; the network just has to be trained to output multiple extra parameters and be trained on those parameters.

For detecting multiple objects, the model makes a window to input into the network and slides that window along the image until it detects an object. This has various problems, however: if the window is too big or too small, it may miss the object; it is computationally demanding to input all of the sliding windows; and the detected size of the object may be smaller than it actually is.
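
The sliding-window idea itself is simple enough to sketch:

```python
def sliding_windows(image, window, stride):
    """Yield every window-sized crop of the image; each crop would be
    fed to the classifier -- which is exactly why it gets so expensive."""
    height, width = image.shape[:2]
    for top in range(0, height - window + 1, stride):
        for left in range(0, width - window + 1, stride):
            yield top, left, image[top:top + window, left:left + window]
```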

So, people came up with YOLO (You Only Look Once).

It divides the image into a grid of S*S rectangles, and an object is predicted by the grid cell that contains the center of its bounding box (the rectangle that surrounds the object). Each grid cell makes multiple predictions of what the bounding box of the object in that cell will be and gives each prediction a confidence score. The predictions are made for each of the object types the network is trained to detect, produced by a series of convolutional neural networks. At the end, it uses non-max suppression to remove overlapping bounding boxes with lower confidence.
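
The non-max suppression step at the end is the easiest part to sketch: keep the most confident box and drop anything that overlaps it too much (a simplified version):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    intersection = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return intersection / (area_a + area_b - intersection)

def non_max_suppression(boxes, scores, threshold=0.5):
    """Walk boxes in confidence order, dropping any that overlap a kept box."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < threshold for j in kept):
            kept.append(i)
    return [boxes[i] for i in kept]
```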

This method is good for multiple reasons, the main one being that it takes in the full image as input just once, so it is much faster than other object detection systems; consequently, it has been used for real-time video tracking. One limitation is that it struggles to find two objects that are close together and therefore fall in the same grid cell. There is a solution to this problem, called anchor boxes, which has been implemented in newer versions.

By this point I was having second thoughts about building this myself, since it looked like it would take me until at least the end of the semester, or even longer, to complete, train, and test the network. So I had the idea of using a pre-trained library from the internet to detect the digits and using the network’s output to build a tool. I was going to try to build a tool that takes an image of handwritten notes on paper and turns it into a digital document.

 

Feb 25 ~ Mar 2

After looking around the internet and downloading and failing to use several GitHub libraries, I found a website from the original creators of YOLO and decided to stick with it. However, I again had trouble using the library with the instructions provided on the website. So I looked around more and found this GitHub post, which outlines step-by-step terminal commands to get the library working on OS X. After some more struggling, the network started to work when I tested it with some images.

After this experience, however, I spent some time reflecting on what I was doing (probably because I was kind of frustrated by this point). I realized that continuing with this project wasn’t going to help me learn anything new, or even be particularly enjoyable, for a few reasons. First, the network was not trained to detect digits, so I would have to either find another library and get it to work, or train the network myself, which is a challenge in and of itself. Second, even if I got it to work, the rest of the process didn’t seem like it would be a learning experience. I would just have to take in the positions of the letters, sort them, and turn them into a digital text file. I wouldn’t be extending my knowledge of neural networks or image processing; it would just be a relatively simple coding challenge, which I would rather do in my spare time.

Alternatively, if I wanted to extend my knowledge of image processing and object detection, I could take on building a network myself. However, after learning about object detection and YOLO, it just seemed like a much larger version of my current neural network, with some added complexity and required computing power. Another factor is that this has already been done several times by other people.

So, it came down to this. Object detection isn’t worth building myself, since the time and effort required isn’t worth what I would learn from the experience, and it isn’t something new to the world. Using an already-built version to create a tool doesn’t help me learn more as a programmer. And finding an actually practical use for this technology is difficult. I decided to put it aside for now and move on.

I think what I gained from these two weeks isn’t terrible. I learned a lot about object detection and about more complicated and practical neural networks than the one I built, all of which was very interesting to research and think through myself. And I feel like I may come back to this someday: maybe building it from scratch as a project, or, when I see a case where this technology could be useful, I’ll have the knowledge to use it.

I had a discussion with Mr. Beatty on Friday that helped me finalize my decision and come up with ideas about where I will head next and how I can plan to maximize what I gain from the experience. Some ideas included:

  • Starting to work on the puzzle-solving programs that I had planned to do after spring break: sudoku, block sliding, maybe some NP problems (which would include sudoku).
  • Learning a new language like C or Java so that I can work on different platforms and projects.
  • Optimizing the neural network I currently have, using the FAQ I mentioned earlier: implementing different backpropagation methods, managing lists better, sorting data.
  • Learning more of the computer science side of programming: optimization of code (minimizing the number of calculations), optimization of storage space, machine code, compilers, etc.
  • Using programming to visualize math problems, proofs, and cool things.
  • Making a tool for beginning programmers who are new to text-based programming (possible use in an intro to programming course?): creating a basic GUI for Processing; visualizing lists, classes, objects, variables, and functions and organizing them; making a library with functions closer to how movement works in block-based programming.
  • Using web scraping and Python to make tools for students: automatically adding homework from DX to the calendar, Dragon’s Gate announcements, resource downloads, DX uploads, anti-procrastination… Maybe a tool for teachers too.
  • Natural language processing…? Just an idea that seems interesting, but with a big possibility of turning into another object detection situation.
  • 3D graphics?
  • A basic physics engine?
  • A neural network that takes news headlines or YouTube video titles and suggests the ones most likely to interest the user?

I have a lot of ideas, and I will hopefully settle on one by next week. This time, I want to plan exactly what I want to do, research how I will do it, and decide what I wish to gain from it before diving deep into the subject. That way, I can decrease the chances of this happening again. Hopefully, my next blog post will contain the plan outlined above.

February 15th Update

Jan 28 ~ Feb 3 Progress

I downloaded the data set of pixel values and answers from the internet and read the README to find out how to use the data in the program.

Then I created the code to run the network: it goes through each layer and gets the output from the product of the input and the weights, plus the biases. I used vectors and matrices to compute this efficiently.
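
In rough form, the run code was something like this (a reconstruction, not the exact file):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def run_network(inputs, weights, biases):
    """Each layer computes sigmoid(W @ a + b) and feeds it to the next layer."""
    activation = np.array(inputs)
    for W, b in zip(weights, biases):
        activation = sigmoid(W @ activation + b)
    return activation
```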

Then came the get_cost() function, which takes in the final output and the answer number and returns the cost of the network.
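
Roughly, it computes the squared error against a target that is all 0s with a 1 in the slot of the correct answer:

```python
import numpy as np

def get_cost(output, answer):
    """Squared-error cost between the network's output and the target digit."""
    target = np.zeros(len(output))
    target[answer] = 1
    return np.sum((output - target) ** 2)
```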

I also watched some of the videos below to understand how backpropagation works, how to compute gradient descent, and how to implement them.

After that, at the end of the week and over the weekend, I created smaller versions of the network and attempted to code the backpropagation mechanism. I also made sketches in my notebook to help outline the lists and variables I would need in the process and which indexes to use.

The picture on the left is the mini-network, with 3 inputs, 1 layer of 3 neurons, and 2 outputs. Each column in the picture on the right is a list that will contain the values needed to calculate the gradients for each weight and bias.

The first column contains the derivatives of the cost function for each output: 2(a-1) and 2(b-1). The second column holds the derivative of the sigmoid function in each neuron. The third column holds the derivative for each input of the neurons. The fourth column, labeled w, holds the derivative for each weight.

Backpropagation works by calculating the effect, the gradient, of each weight and bias on the cost function. So you go backward through the network and calculate how much each step affects the final cost.
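
For the mini-network above, the whole backward pass fits in a few lines; the arrays here correspond to the columns in the notebook sketch (a rough sketch, not my exact code):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# The mini-network: 3 inputs -> 3 hidden neurons -> 2 outputs, with targets of 1.
x = np.array([0.5, 0.1, 0.9])
W1, b1 = np.random.randn(3, 3), np.zeros(3)
W2, b2 = np.random.randn(2, 3), np.zeros(2)

# Forward pass, keeping the intermediate values backpropagation needs.
z1 = W1 @ x + b1
a1 = sigmoid(z1)
z2 = W2 @ a1 + b2
a2 = sigmoid(z2)

# Backward pass -- one array per column in the sketch.
dC_da2 = 2 * (a2 - 1)            # column 1: cost derivatives, 2(a-1) and 2(b-1)
dC_dz2 = dC_da2 * a2 * (1 - a2)  # column 2: through the sigmoid derivative
dC_dW2 = np.outer(dC_dz2, a1)    # column 4: gradient for each weight
dC_da1 = W2.T @ dC_dz2           # column 3: derivative for each neuron input
# ...and the same pattern repeats backward for W1 and b1.
```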

Feb 4 ~ Feb 10 Progress

This week was Chinese New Year break, and I got the mini-network to do backpropagation after many lines of code. I then changed the number of outputs to 3 to check that it scales up accordingly.

After that, I implemented backpropagation in the bigger network and tried to make it work (I will note, however, that it was quite hard to know whether it worked correctly, since it has so many parts that I can’t calculate by hand and compare). Some minor errors occurred, but I got them fixed quickly.

Then I worked on actually applying the changes calculated in backpropagation to the network, to make it learn. The sources I read said it is efficient to apply the changes in mini-batches of random cases. This way the network does not favor one of the numbers, but also does not have to go through 10,000 cases just to make slight changes.
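
A sketch of one mini-batch update (compute_gradients stands in for the backpropagation step; the names here are hypothetical, not my real ones):

```python
import random
import numpy as np

def train_step(cases, parameters, compute_gradients, learning_rate=0.5, batch_size=32):
    """Average the gradients over a random mini-batch, then apply the change."""
    batch = random.sample(cases, batch_size)  # random cases, so no digit is favored
    totals = [np.zeros_like(p) for p in parameters]
    for case in batch:
        for total, gradient in zip(totals, compute_gradients(case)):
            total += gradient
    # Divide by batch_size so the average change is applied, not the sum.
    return [p - learning_rate * t / batch_size for p, t in zip(parameters, totals)]
```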

Then, I tested with multiple examples, printing the cost of the network each time to make sure it was working correctly.

Although the cost went down over the first 10 cases, it would plateau after a certain point. So, I created a checking system that takes in an input, spits out the network’s output, and also prints the actual answer.

From multiple test runs, I saw that the system clearly preferred to output the answer ‘0’, so I went through the code line by line to see what could be causing this.

I found that the code I had used for the mini-network arbitrarily set the answer of every case to 0. I fixed that and trained the network some more. By Sunday of this week, the network guessed the majority of the answers correctly.

Feb 11 ~ Feb 17 Progress

Since the network is now working, I decided to find a way to store the current network’s data somewhere, so that it does not have to be retrained every time you run the program. I found out that Python 3 has a built-in module called pickle that can store values in a .pickle file, so I read some quick tutorials online and wrote save & load code for the network.
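
The save & load code boils down to a few lines with pickle (a simplified version of mine):

```python
import pickle

def save_network(weights, biases, filename="network.pickle"):
    """Write the current parameters to a .pickle file."""
    with open(filename, "wb") as f:
        pickle.dump((weights, biases), f)

def load_network(filename="network.pickle"):
    """Read the parameters back so training picks up where it left off."""
    with open(filename, "rb") as f:
        return pickle.load(f)
```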

In the process, I decided to create a 2D list containing the file names for each network, so that the appropriate files can be loaded easily. I also created a ‘save as’ (pas) feature and a ‘create new’ feature.

While creating these features, I realized I could build a small user interface to navigate and use the network’s features more easily.

Features include:

  • train (t): trains the network x many times, based on the number you give it
  • check (c): checks the network on one case against the answer
  • check_multiple (ch): checks x many cases and returns the network’s accuracy
  • save and save_as (p, pas): save the network
  • delete (del): deletes a network’s saved file names from the list
  • stop (s): terminates the program
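
The interface behind those commands is just a loop along these lines (train(), check(), etc. stand in for my real functions):

```python
# A sketch of the command loop; the handler functions are stand-ins.
def run_interface():
    while True:
        command = input("command (t/c/ch/p/pas/del/s): ").strip()
        if command == "t":
            train(int(input("train how many times? ")))
        elif command == "c":
            check()
        elif command == "ch":
            print("accuracy:", check_multiple(int(input("how many cases? "))))
        elif command == "p":
            save()
        elif command == "pas":
            save_as(input("file name? "))
        elif command == "del":
            delete(input("which saved network? "))
        elif command == "s":
            break
```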

Finally, I wrote a few comments on each function and loop so that the code is easier for me and others to read and understand.

Now I am thinking about where this could be useful around me or the school, and I am going to try to implement the network for that use.

Passion Project – Spring 2019 Planner

This is my hope for the future of my Passion Project class. It feels a bit tight, but I was pleasantly surprised by the speed at which the network was built, to be honest; and although work for other classes is most likely going to increase in the coming weeks, I felt that I should set a goal that pushes me further.

January 21st Progress

This week I finished the Udemy Python tutorial that I had been working on for the past few weeks.

Some of the files are basically just test/playground files. Some are useful programs and small games: blackjack, tic-tac-toe, a Collatz conjecture ‘simulation’, and the sieve of Eratosthenes (which lists the primes under a certain limit). I also made a program to solve a math probability problem, under the name Lily (or one of the files with a variation of that name). Overall, after this experience, I feel much more comfortable working in Python: creating basic classes and objects, manipulating lists, etc. I still feel a bit slow in Python, but I guess that will get better over time.
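
My sieve was roughly this shape (not the exact file):

```python
def sieve_of_eratosthenes(limit):
    """List all primes below `limit` by crossing out multiples of each prime."""
    is_prime = [True] * limit
    is_prime[0:2] = [False, False]  # 0 and 1 are not prime
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            for multiple in range(n * n, limit, n):
                is_prime[multiple] = False
    return [n for n in range(limit) if is_prime[n]]
```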

I also started an attempt to build a basic neural network. I got carried away while “trying stuff out”, so I spent extra time this weekend building the basis of the system. It’s going to be the same neural network as the one here:

However, at the time of writing this (Jan 27), some problems have arisen. First, I can’t seem to find a simple way to store variable values between each ‘run’ of the program in Python, which would undermine the “learning” part of the network. Second, I haven’t found out how to input the pixel color values into the program from an image file. Lastly, I haven’t figured out how I am going to do any of the backpropagation required for the network to actually learn efficiently.

In the course of doing this, I have also watched 3Blue1Brown’s (the creator above) series of videos on linear algebra, so that I understand what I am programming when I do vector calculations in Python. That is here:

I have a lot more work to do if I am going to make anything out of this… so more work I guess.

Other ideas I’ve thought of in conjunction with neural networks are things like voice recognition, or figuring out emotion, topic, and other features from text; text analysis, I guess.

January 7th & January 14th Progress

These two weeks I have mostly been working on learning basic Python and its syntax, and also brainstorming goals for my project.

Python stuff:

https://www.udemy.com/complete-python-bootcamp/

  • This tutorial helped me get used to Python’s syntax, using classes, and manipulating lists. As of now (Jan 23rd), I have made a tic-tac-toe game and a blackjack game.

https://www.cheatography.com/davechild/cheat-sheets/python/

  • Just a basic cheat sheet of Python syntax in case I forget.

https://automatetheboringstuff.com/#toc

  • Syntax and information manipulation. It showcases a lot of different and interesting ways to use lists and dictionaries, and introduces some new concepts like regular expressions.

Brainstorming:

Neural networking:

http://neuralnetworksanddeeplearning.com/chap1.html

  • This website is from the video below. It explains how neural networks work and a lot of the math behind them. I used it more as an add-on to the video, but I suspect this article will be more useful when I start making one.

http://colah.github.io/

  • This also has a lot of math and complicated concepts in it, but a few pages were an interesting read nonetheless.

3Blue1Brown’s amazing video series:

Sudoku(?):

https://stackoverflow.com/questions/6963922/java-sudoku-generatoreasiest-solution

  • A forum thread about solutions for a sudoku-generating program. It led to the Wikipedia article below, but I didn’t understand a whole lot. The code was all in Java, so I had some difficulty reading it. It was still interesting, though.

https://en.wikipedia.org/wiki/Dancing_Links

(links reduced to sources)

Rubik’s Cube:

…and this guy’s channel in general, Code Bullet:

https://www.youtube.com/channel/UC0e3QhIYukixgh5VVpKHH9Q

  • He makes algorithms to play games and solve puzzles, and does some neural-network-type stuff too. A lot of his solutions seem to use brute force for specific situations instead of a general solution, though. The Rubik’s cube one isn’t like that, however.

Other ideas:

A few game ideas

Visualizing Math (3Blue1Brown style) and also 3D visualizations

Simulations (Virus/scientific simulations) + (Physics engine) + (random other simulations)