Description
Problem #1
Business Intelligence Reporting using Cognos
1. Download the weather dataset available at https://www.kaggle.com/PROPPG-PPG/hourly-weather-surface-brazil-southeast-region?select=sudeste.csv
2. Explore the dataset and identify data field(s) that could be measured by certain factors or
dimensions. (Follow recorded lecture #18 and synchronous session #18.)
Example: In a sales dataset, you may find a measurable field "total sales", which could be
analyzed by other factors such as "products", "time", "location", etc. These factors are known as
dimensions. Depending on the data, you may also find opportunities to slice and dice, i.e. analysis
may be possible at a more granular level, e.g. from total sales by city to total sales by store.
3. Write a ½-page explanation of how you selected the measurable field, i.e. the fact, and what the
possible dimensions are. Include this part in your PDF file.
4. Clean the dataset and, if required, perform formatting. You can perform the cleaning and
formatting using spreadsheet operations or a programming script. If you use a program, add it to
GitLab; if you use other methods, write the steps in the PDF file.
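If you go the scripting route, a minimal Java sketch along the following lines could serve as a starting point. The input file name sudeste.csv comes from the dataset, but the cleaning rule (dropping rows with empty fields) is an assumption; adapt it to the issues you actually find in the data:

```java
import java.io.*;

// Minimal CSV cleaning sketch: copies rows, skipping any row with an empty
// field. Assumes simple comma-separated lines (no quoted commas); adjust
// the parsing and the cleaning rule to the real data.
public class CleanWeatherCsv {
    public static void main(String[] args) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader("sudeste.csv"));
             PrintWriter out = new PrintWriter(new FileWriter("sudeste_clean.csv"))) {
            String header = in.readLine();            // keep the header row
            out.println(header);
            String line;
            while ((line = in.readLine()) != null) {
                // -1 keeps trailing empty fields so the check below sees them
                String[] fields = line.split(",", -1);
                boolean hasEmpty = false;
                for (String f : fields) {
                    if (f.trim().isEmpty()) { hasEmpty = true; break; }
                }
                if (!hasEmpty) out.println(line);     // keep only complete rows
            }
        }
    }
}
```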
5. Create a Cognos account and import your dataset. Identify the dimensions, and create/import the
dimension tables.
6. Based on your understanding of the domain (please read the information/metadata available
on the dataset source, i.e. Kaggle), create a star schema or snowflake schema. Provide justification
for your model in the PDF file. At this stage, build your schema (dimensional modelling)
using a drawing tool like draw.io.
7. In addition to the justification, attach a screenshot of the model created using Cognos (star schema
or snowflake schema) in the PDF file.
8. Display a visual analysis of the data in a suitable format, e.g. a bar graph showing temperature
change in terms of a suitable dimension. Add a screenshot of the analysis to the PDF, or add a
screen recording of the analysis to your .zip folder.
Problem #2
Sentiment Analysis – Java program only, with no additional libraries. Use the parser you
developed in the previous assignment.
1. To perform this task, you need to consider the processed tweets (messages only; ignore other
fields) that you obtained and stored in MongoDB in your previous assignment. If you could not
perform/complete that task, obtain the processed MongoDB tweets collection by contacting
your TA, Manraj.
2. Write a script to create a bag-of-words for each tweet message. (Code from online or other
sources is not accepted.)
e.g. tweet1 = "Canada is cold cold. I don't feel good; but is not bad"
bow1 = {"Canada":1, "is":2, "cold":2, "I":1, "don't":1, "feel":1, "good":1, "but":1, "not":1, "bad":1}
You do not need any libraries. Just implement a simple counter using a loop.
Compare each bag-of-words with a list of positive and negative words. You can download lists of
positive and negative words from online source(s). You do not need any libraries; just perform a
word-by-word comparison against the lists, which you can get from any online platform.
E.g. negative words can be found here: https://gist.github.com/mkulakowski2/4289441
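As a sketch of the "simple counter using a loop" idea — whitespace tokenization and stripping trailing punctuation are assumptions here; match them to your own parser's behaviour:

```java
import java.util.*;

// Bag-of-words sketch: counts each token in a tweet with a plain loop, no
// libraries. Splitting on whitespace and trimming trailing punctuation such
// as '.' and ';' are assumptions; align them with your parser.
public class BagOfWords {
    public static Map<String, Integer> build(String tweet) {
        Map<String, Integer> bow = new HashMap<>();
        for (String token : tweet.split("\\s+")) {
            String word = token.replaceAll("[.,;!?]+$", "");  // drop trailing punctuation
            if (word.isEmpty()) continue;
            bow.merge(word, 1, Integer::sum);                 // increment the counter
        }
        return bow;
    }

    public static void main(String[] args) {
        String tweet1 = "Canada is cold cold. I don't feel good; but is not bad";
        System.out.println(build(tweet1));
        // e.g. {Canada=1, is=2, cold=2, I=1, don't=1, feel=1, good=1, but=1, not=1, bad=1}
    }
}
```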
3. Tag each tweet as "positive", "negative", or "neutral" based on the overall score. You can add an
additional column to present your finding.
E.g. frequencies of the matches: "cold"=2, "not"=1, "bad"=1 (negative, -4); "good"=1 (positive, +1). Overall score = -3.
Tweet #   message                                                   match                   polarity
1         Canada is cold cold. I don't feel good; but is not bad    cold, good, not, bad    negative
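A possible scoring sketch built on the bag-of-words above. Loading the word lists into Sets (e.g. from the gist linked earlier) is assumed to happen elsewhere, and thresholding the score at zero is an assumption:

```java
import java.util.*;

// Polarity tagging sketch: compares a bag-of-words against positive and
// negative word lists and tags the tweet by the overall score. Each match
// counts as often as it occurs in the tweet (e.g. "cold"=2 contributes -2),
// so the worked example above scores -4 + 1 = -3 and tags as "negative".
public class PolarityTagger {
    public static String tag(Map<String, Integer> bow,
                             Set<String> positives, Set<String> negatives) {
        int score = 0;
        for (Map.Entry<String, Integer> e : bow.entrySet()) {
            String word = e.getKey().toLowerCase();
            if (positives.contains(word)) score += e.getValue();
            if (negatives.contains(word)) score -= e.getValue();
        }
        if (score > 0) return "positive";
        if (score < 0) return "negative";
        return "neutral";
    }
}
```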
Problem #3
Semantic Analysis – Java program only, with no additional libraries. Use the parser you
developed in the previous assignment.
1. For this task, consider the processed tweets collection that you created in Assignment 4.
2. Use the following steps to compute TF-IDF (term frequency-inverse document frequency):
a. Suppose you have 50 tweets (messages only) that are stored in 50 JSON arrays. You
need to consider these data points as the total number of documents (N). In this case,
N=50.
Now, use the search queries "people", "condition", and "weather", and count in how many
documents each of these words appears.
Table 3.1: tf-idf table
Total Documents (N): 50

Search Query   Documents containing term (df)   N/df     log10(N/df)
weather        30                               50/30    0.221848749
people         5                                50/5     1
condition      10                               50/10    0.698970004
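A sketch of how step (a) might be computed, assuming the tweet messages have already been retrieved from MongoDB into a list (the loadTweets helper is hypothetical):

```java
import java.util.*;

// Sketch for Table 3.1: for each query term, count the documents (tweets)
// containing it (df), then compute log10(N/df). The substring containment
// check is an assumption; align it with your parser's tokenization.
public class IdfTable {
    public static void main(String[] args) {
        List<String> docs = loadTweets();            // assumed: messages from MongoDB
        String[] queries = {"weather", "people", "condition"};
        int n = docs.size();                         // total number of documents, N
        for (String q : queries) {
            int df = 0;
            for (String doc : docs) {
                if (doc.toLowerCase().contains(q)) df++;
            }
            if (df == 0) { System.out.println(q + ": not found"); continue; }
            double idf = Math.log10((double) n / df);
            System.out.printf("%-10s df=%d  N/df=%d/%d  log10(N/df)=%.9f%n",
                              q, df, n, df, idf);
        }
    }

    // Hypothetical loader; replace with your MongoDB retrieval code from Assignment 4.
    static List<String> loadTweets() { return new ArrayList<>(); }
}
```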
b. Once you build the above table, you need to find which document has the highest
occurrence of the word "weather". You can find this by performing a frequency count
of the word per document.
Table 3.2: term frequency table
Term: "weather" (appeared in 30 documents)

Document    Total Words (m)   Frequency (f)
tweet #1    6                 2
tweet #2    10                1
:           :                 :
tweet #30   8                 1
c. You should print, programmatically, the tweet that has the highest relative
frequency. You can find this by computing f/m.
*** Just to create the tabular structures (Tables 3.1 and 3.2), you can use a 3rd-party library.
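Steps (b) and (c) might be computed along these lines; whitespace tokenization and the punctuation-stripping rule are assumptions to match to your parser, and loadTweets is again a hypothetical loader:

```java
import java.util.*;

// Sketch for Table 3.2 and step (c): for each tweet, count occurrences of
// the term (f) and total words (m), then print the tweet with the highest
// relative frequency f/m.
public class HighestRelativeFrequency {
    public static void printHighest(List<String> docs, String term) {
        String best = null;
        double bestRatio = -1.0;
        for (String doc : docs) {
            String[] tokens = doc.split("\\s+");
            int m = tokens.length;                  // total words in the tweet
            int f = 0;                              // occurrences of the term
            for (String t : tokens) {
                if (t.toLowerCase().replaceAll("[.,;!?]+$", "").equals(term)) f++;
            }
            if (m > 0 && (double) f / m > bestRatio) {
                bestRatio = (double) f / m;
                best = doc;
            }
        }
        System.out.println("Highest relative frequency (f/m = " + bestRatio + "): " + best);
    }

    public static void main(String[] args) {
        List<String> docs = loadTweets();           // assumed: messages from MongoDB
        printHighest(docs, "weather");
    }

    // Hypothetical loader; replace with your MongoDB retrieval code from Assignment 4.
    static List<String> loadTweets() { return new ArrayList<>(); }
}
```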
Assignment 5 Submission Format:
1) Compress all your reports/files into a single .zip file and give it a meaningful name.
You are free to choose any meaningful file name, preferably BannerId_Lastname_firstname_5408_A5,
but avoid generic names like assignment-5.
2) Submit your reports only in PDF format.
Please avoid submitting .doc/.docx files and submit only the PDF version. You can merge all the reports into
a single PDF or keep them separate. You should also include output (if any) and test cases (if any) in the
PDF file in the proper format (e.g. tables, charts, etc. as required).
3) Your code needs to be submitted on https://git.cs.dal.ca/