I am an NSERC Postdoctoral Fellow at the LS3 lab at Ryerson University. I have also been a Visiting Researcher at the IBM Centre for Advanced Studies (CAS) since May 2016.
I hold a Ph.D. (Aug 2013) in Computer Science from the University of New Brunswick, where I received the Best Graduate Thesis Award.
Previously, I was a Postdoctoral Researcher at the Data Science Lab of Ryerson University from Aug to Dec 2015, and a Postdoctoral Research Fellow at the MADMUC Laboratory of the University of Saskatchewan from Aug 2013 to Sep 2014.
I have also been an adjunct professor at the Mazandaran University of Science and Technology since Feb 2015.
Alongside my professional life, my hobbies include playing the santoor, working out, cooking, and spending time with friends and family.
My research interests are summarized below.
My PhD research focused on detecting fraudulent users with misleading purchase behaviour in electronic marketplaces. I proposed a computational model of trust that combines the cognitive and probabilistic views of trust and considers different environmental circumstances when evaluating the trustworthiness of participants in e-commerce systems. In an extensive review of the literature (published in the JTAER journal), I introduced a multidimensional framework that articulates the essential elements in establishing trust in online communities such as e-commerce, P2P networks, ad hoc networks, and cloud computing. In my PhD I also explored the human factors reported in the psychology and economics literature and proposed a cognitive, subjective model for trust evaluation and decision making as a controlling mechanism for different types of environments, including cooperative and competitive electronic commerce. Empirical results (published in AAMAS, the JAAMAS journal, and the Computational Intelligence journal) showed the effectiveness of quantifying human dispositions and their cognitive features, including competency, honesty, and willingness, in modelling trust for cooperative and competitive e-commerce.
For more information, please read my thesis.
One research challenge that has bothered me, being widely neglected in the literature on trust models, is the naive approach to setting the minimum level of trustworthiness of participants, in other words, the trust value threshold. This matters: an inappropriately set threshold can filter out possibly good advice or, conversely, allow malicious users to badmouth good services. There had been no systematic approach to setting the honesty threshold. I proposed a self-adaptive honesty threshold management mechanism based on a PID feedback controller. Experimental results show that adaptively tuning the honesty threshold to market performance enables honest users to obtain a higher quality of service than the static, intuition-defined threshold values used in previous work.
For more information, please refer to How much trust is enough to trust? A market-adaptive trust threshold setting for e-marketplaces.
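To make the idea concrete, here is a minimal sketch of such a controller. This is not the published implementation; the gains, the market-performance signal, and all names are illustrative assumptions.

```python
# Illustrative sketch: a PID feedback controller that adapts an honesty
# threshold so that observed market performance tracks a target level.
# Gains (kp, ki, kd) and the performance signal are placeholder choices.

class PIDThresholdController:
    def __init__(self, kp=0.5, ki=0.1, kd=0.05, threshold=0.5):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.threshold = threshold      # current honesty threshold in [0, 1]
        self.integral = 0.0             # accumulated error (I term)
        self.prev_error = 0.0           # last error (for the D term)

    def update(self, target_performance, observed_performance):
        """Nudge the threshold based on the performance error."""
        error = target_performance - observed_performance
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        adjustment = (self.kp * error
                      + self.ki * self.integral
                      + self.kd * derivative)
        # A positive error (market underperforming) raises the threshold,
        # i.e. filters advisers more strictly; clamp to the valid range.
        self.threshold = min(1.0, max(0.0, self.threshold + adjustment))
        return self.threshold
```

Calling `update(0.9, 0.7)` on a fresh controller, for example, raises the threshold above its initial 0.5, tightening the honesty filter while the market underperforms.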
Between you and me, people usually act selfishly, especially when no reward or punishment is involved. This is even more evident in virtual environments, where people can act anonymously and thus have no incentive to act truthfully. In other words, in the absence of legal authorities and enforcement mechanisms in open e-marketplaces, it is extremely challenging for a user to validate the quality of opinions (i.e., ratings and reviews) of products or services provided by other users. Rationally, advisers tend to be reluctant to share their truthful experiences with others. I proposed an adaptive incentive mechanism in which advisers are motivated to share their actual experiences with their trustworthy peers (friends/neighbours in the social network) in e-marketplaces (a social commerce context), while malicious users are eventually evacuated from the system. Experimental results demonstrate the effectiveness of our mechanism in promoting honesty among users in sharing their past experiences.
For more information, please read SocialTrust: Adaptive Trust Oriented Incentive Mechanism for Social Commerce.
One controversial issue is how much access giant companies should have to our data. There is no doubt that companies like Google use our search histories to improve their personalized recommendations, and we benefit from that, but the question is: to what degree do we control our shared data? In other words, should they be allowed to give our data to other companies without our permission, and what preventive countermeasures are available? Concerns regarding privacy arise when user data is shared with unknown third parties. These concerns can be alleviated at two stages: i) ensuring selective control over which applications user data is shared with, and ii) monitoring and penalizing errant data consumers who violate the terms of their contractual agreement and potentially abuse user data. We proposed a trust management mechanism for monitoring data consumers' compliance with the contractual agreements under which data was shared with them. The trust mechanism is based on user complaints about suspected privacy violations and is able to identify the data consumers responsible. The framework penalizes a data consumer found guilty of violating its data use agreement by decreasing its trust value. This makes the data consumer less likely to be selected to receive user data and limits its participation in the user data marketplace, forcing it to pay a higher price to purchase user data.
For those interested in learning more, please refer to Trust Mechanism for Enforcing Compliance to Secondary Data Use Contracts.
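The penalty-and-pricing loop described above can be sketched roughly as follows. This is an assumed simplification for illustration, not the actual model from the paper; the multiplicative penalty and the pricing rule are my own placeholder choices.

```python
# Illustrative sketch: complaint-driven trust updates for data consumers.
# Each upheld privacy complaint lowers the consumer's trust value, and a
# lower trust value raises the price it must pay for user data.

class DataConsumer:
    def __init__(self, name, trust=1.0):
        self.name = name
        self.trust = trust  # in (0, 1]; a new consumer starts fully trusted

def penalize(consumer, upheld_complaints, penalty=0.2):
    """Decrease trust multiplicatively for each complaint found valid."""
    for _ in range(upheld_complaints):
        consumer.trust *= (1.0 - penalty)
    return consumer.trust

def data_price(base_price, consumer):
    """Less trusted consumers pay more for the same user data."""
    return base_price / max(consumer.trust, 1e-6)
```

With these placeholder numbers, two upheld complaints drop a consumer's trust from 1.0 to 0.64 and raise its price for a 10-unit data purchase to about 15.6 units, capturing the disincentive the mechanism is meant to create.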
With funding from the Ontario Centres of Excellence, I have led a research project on a news ranking framework for The Globe and Mail. Although it is one of the largest news agencies in Canada, The Globe and Mail relies on the opinions of expert editors to rank news on its homepage, which is neither efficient nor cost-effective.
To attract more visitors to their website, they need to display personalized news to each visitor based on their clicking/browsing behaviour as well as their demographic information. In the first stage of the project, we proposed a news ranking model that predicts the position of a news item on the homepage based on its freshness, the reputation of the news source and authors, the news generation flow, and keyword importance. In a later step, we intend to incorporate user models into our framework to provide personalized recommendations.
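As a rough illustration of how such signals could be combined, here is a weighted-scoring sketch. The feature names, the exponential freshness decay, and the weights are all assumptions of mine, not the project's actual model.

```python
# Illustrative sketch: score news items for homepage ranking by
# combining freshness, source reputation, generation flow, and
# keyword importance. Weights and decay rate are placeholder values.

import time

def freshness(published_ts, now=None, half_life_hours=6.0):
    """Exponential decay: a story loses half its freshness every half-life."""
    now = now if now is not None else time.time()
    age_hours = max(0.0, (now - published_ts) / 3600.0)
    return 0.5 ** (age_hours / half_life_hours)

def rank_score(item, weights=(0.4, 0.3, 0.1, 0.2)):
    """Weighted sum of the four signals; each feature is assumed in [0, 1]."""
    w_fresh, w_rep, w_flow, w_kw = weights
    return (w_fresh * freshness(item["published_ts"])
            + w_rep * item["source_reputation"]
            + w_flow * item["generation_flow"]
            + w_kw * item["keyword_importance"])

def rank_homepage(items):
    """Higher score means a higher position on the homepage."""
    return sorted(items, key=rank_score, reverse=True)
```

Under this sketch, a just-published story from a reputable source outranks a two-day-old story with weak signals, which is the qualitative behaviour the model aims for.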
Working with several master's and PhD students has opened new research directions for me, and I have become involved in numerous exciting research lines. The most important ones are summarized as follows:
One piece of research I collaborated on concerns mining information from social media to address the cold-item problem. While popular products receive many reviews, many other products do not have an adequate number of reviews, leading to the cold-item problem. We proposed a solution outline for the cold-item problem by automatically generating reviews and predicting ratings for cold products from the available reviews of similar products on e-commerce websites, as well as from users' opinions shared on microblogging platforms such as Twitter. We proposed a framework that builds a formal semantic representation of products from unstructured product descriptions, user reviews, and user ratings. Such representations help us measure product similarity and relatedness in an accurate and cost-effective way.
Another piece of research, for which I served as an advisor, concerns detecting life events on Twitter based on temporal semantic features. In this research we aim to detect important life events from user-generated social content. Life events, such as marriage, travel, and career change, among others, are difficult to detect because: i) they are specific to a given user and do not have a wider-reaching reflection; ii) they are often not reported directly and need to be inferred from the content posted by individual users; and iii) many users do not report their life events on social platforms, making the problem highly class-imbalanced. In this research, we propose a semantic approach based on the word embedding literature to model instances of life events.
As an IBM visiting scholar, I have been engaged in collaborative research with IBM fellows since Jan 2016, and we have developed an advanced search mechanism to find vulnerabilities and security issues in security management systems such as IBM AppScan Enterprise. Our final product showed a significant improvement in security issue retrieval compared with IBM's existing search system. We also filed two patents, on ambiguity resolution for security data and on recommending the most relevant and urgent security issues to security analysts.
I have supervised a couple of MSc students since Dec 2015, working in the field of recommender systems. Focusing on user-generated data in social media, they proposed a personalized news recommender system that extracts users' implicit and explicit interests using word embedding methods and builds a user model adaptively. Another piece of research concerns an ad recommender system that enables businesses to show personalized ads to social media users based on their interests, demographic information, and social structure.
Last but not least is our work on an activity recommender system. Since users may feel unsure about the activities they could engage in during their leisure time, our proposed time-aware recommender system recommends a range of activities that might interest them by dynamically modelling users in different time slots.