Search results

1 – 10 of 250
Article
Publication date: 5 February 2024

Yuichi Miyamoto

Abstract

Purpose

This paper aims to discuss the significance of teacher authorship (jissen kiroku) developed during jugyo kenkyu. Specifically, it explores the structural conditions of jugyo kenkyu that enabled the flourishing of jissen kiroku.

Design/methodology/approach

To trace how jissen kiroku developed within jugyo kenkyu, this paper adopts the triad of authors-text-readers as its analytical perspective. The disputes of the 1960s–1980s are an apt object of inquiry because they elucidate how readers read jissen kiroku, which is typically challenging to observe.

Findings

Jissen kiroku is a powerful tool for semantically preserving, reconstructing and consolidating professional values and knowledge in jugyo kenkyu while deepening connoisseurship. Voluntary educational research associations (VERAs) encourage teachers to write and read jissen kiroku to develop their professionalism, which has also helped develop exclusive semantics within the field. These developments were possible due to the public nature of jissen kiroku, which was disseminated to lesson study (LS) actors, thereby strengthening discussions both inside and outside VERAs.

Research limitations/implications

The paper proposes a shift in views on educational science and emphasizes authorship as authority, in that the professionalism of teaching can be protected and elevated through authoring.

Originality/value

The significant roles of writing practice have not been sufficiently explored. This paper locates the value of authorship in its public nature and its openness to all teachers, which together enable the enhancement of professionalism in the LS field.

Details

International Journal for Lesson & Learning Studies, vol. 13 no. 1
Type: Research Article
ISSN: 2046-8253

Keywords

Article
Publication date: 17 June 2024

Zhenghao Liu, Yuxing Qian, Wenlong Lv, Yanbin Fang and Shenglan Liu

Abstract

Purpose

Stock prices are subject to the influence of news and social media, and a discernible co-movement pattern exists among multiple stocks. Using a knowledge graph to represent news semantics and establish connections between stocks is deemed essential and viable.

Design/methodology/approach

This study presents a knowledge-driven framework for predicting stock prices. The framework integrates relevant stocks with the semantic and emotional characteristics of textual data. The authors construct a stock knowledge graph (SKG) to extract pertinent stock information and use a knowledge graph representation model to capture both the relevant stock features and the semantic features of news articles. Additionally, the authors consider the emotional characteristics of news and investor comments, drawing insights from behavioral finance theory. The authors examined the effectiveness of these features using the combined deep learning model CNN+LSTM+Attention.
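
The attention step of the CNN+LSTM+Attention combination mentioned above can be illustrated in isolation. The following numpy sketch shows attention-weighted pooling over a sequence of LSTM hidden states; the shapes, the scoring vector and all values are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(hidden, score_vec):
    """Collapse a (T, d) sequence of hidden states into one (d,)
    context vector by attention-weighted averaging."""
    weights = softmax(hidden @ score_vec)  # (T,) attention weights
    return weights @ hidden                # (d,) context vector

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))  # 5 time steps, 8-dim hidden states
w = rng.normal(size=8)       # scoring vector (learnable in practice)
ctx = attention_pool(H, w)
```

In a full model, the context vector would be fed to a dense output layer that predicts the price movement.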

Findings

Experimental results demonstrate that the knowledge-driven combined feature model exhibits significantly improved predictive accuracy compared to single-feature models.

Originality/value

The study highlights the value of the SKG in uncovering potential correlations among stocks. Moreover, the knowledge-driven multi-feature fusion stock forecasting model enhances the prediction of stock trends for well-known enterprises, providing valuable guidance for investor decision-making.

Details

The Electronic Library, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 15 December 2023

Yuhong Peng, Jianwei Ding and Yueyan Zhang

Abstract

Purpose

This study examines the relationship between streamers' product descriptions, customer comments and online sales and focuses on the moderating effect of streamer–viewer relationship strength.

Design/methodology/approach

Between June 2021 and April 2022, structured data from 965 livestreaming sessions and unstructured text data totalling 42,956,147 characters were collected from two major live-streaming platforms. Text analysis and regression analysis methods were employed for data analysis.
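
A regression of the kind described above typically tests an inverted U-shape by adding a squared term. A minimal numpy sketch of that squared-term test, on synthetic data rather than the authors' dataset:

```python
import numpy as np

rng = np.random.default_rng(1)
length = rng.uniform(0, 10, 200)  # synthetic "comment length"
# Synthetic sales with a built-in inverted U (peak at length = 4) plus noise
sales = 4 * length - 0.5 * length**2 + rng.normal(0, 1, 200)

# Fit sales = b0 + b1*length + b2*length**2;
# an inverted U-shape shows up as a significantly negative b2
b2, b1, b0 = np.polyfit(length, sales, 2)
peak = -b1 / (2 * b2)  # turning point of the fitted parabola
```

A moderating effect, such as relationship strength, would additionally enter as an interaction term in the same regression.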

Findings

First, the authors' analysis reveals an inverted U-shaped relationship between comment length and product sales. Notably, comment volume and comment emotion positively influence product sales. Furthermore, the semantic richness, emotion and readability of streamers' product descriptions also positively influence product sales. Second, the authors find that the strength of the streamer–viewer relationship weakens the positive effects of comment volume and comment emotion without moderating the inverted U-shaped effect of comment length. Lastly, the strength of the streamer–viewer relationship also diminishes the positive effects of the emotion, semantics and readability of streamers' product descriptions on product sales.

Originality/value

This study is the first to concurrently examine the direct and interactive effects of user-generated content (UGC) and marketer-generated content (MGC) on consumer purchase behaviors in livestreaming e-commerce, offering a novel perspective on individual decision-making and cue utilization in the social retail context.

Details

Marketing Intelligence & Planning, vol. 42 no. 1
Type: Research Article
ISSN: 0263-4503

Keywords

Article
Publication date: 18 May 2023

Rongen Yan, Depeng Dang, Hu Gao, Yan Wu and Wenhui Yu

Abstract

Purpose

Question answering (QA) systems answer questions posed by people in natural language. In QA, owing to the subjectivity of users, the questions they query take different expressions, which increases the difficulty of text retrieval. Therefore, the purpose of this paper is to explore a new query rewriting method for QA that integrates multiple related questions (RQs) to form an optimal question. It is also important to generate a new dataset pairing each original query (OQ) with multiple RQs.

Design/methodology/approach

This study collects a new dataset, SQuAD_extend, by crawling a QA community and uses a word graph to model the collected OQs. Next, beam search finds the best path through the graph to obtain the best question. To represent the features of each question deeply, the pretrained model BERT is used to model the sentences.
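
The word-graph plus beam-search step can be sketched as follows; the toy graph, edge scores and special tokens are invented for illustration and are not the paper's data.

```python
import heapq

def beam_search(graph, start, end, beam_width=3):
    """Keep the `beam_width` highest-scoring partial paths at each step.
    `graph` maps a word to a list of (next_word, edge_score) pairs and
    is assumed to be acyclic."""
    beams = [(0.0, [start])]
    complete = []
    while beams:
        candidates = []
        for score, path in beams:
            if path[-1] == end:
                complete.append((score, path))  # finished path; stop extending
                continue
            for nxt, s in graph.get(path[-1], []):
                candidates.append((score + s, path + [nxt]))
        beams = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
    return max(complete, key=lambda c: c[0])

# Toy word graph merging several phrasings of one question
graph = {
    "<s>": [("how", 1.0), ("what", 0.6)],
    "how": [("install", 0.9)],
    "what": [("install", 0.4)],
    "install": [("python", 1.0)],
    "python": [("</s>", 1.0)],
}
score, best = beam_search(graph, "<s>", "</s>")  # best-scoring rewrite
```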

Findings

The experimental results show three outstanding findings. (1) The quality of the answers is better after adding the RQs of the OQs. (2) The word graph used to model the question and select the optimal path is conducive to finding the best question. (3) BERT can deeply characterize the semantics of the exact question.

Originality/value

The proposed method uses a word graph to construct multiple candidate questions and select the optimal path for rewriting the question, and the quality of the answers is better than the baseline. In practice, the research results can help guide users to clarify their query intentions and ultimately reach the best answer.

Details

Data Technologies and Applications, vol. 58 no. 1
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 3 June 2024

Mariam Ben Hassen, Mohamed Turki and Faiez Gargouri

Abstract

Purpose

This paper introduces the problematic of SBP modeling. Our objective is to provide a conceptual analysis of the concept of SBP. This facilitates, on the one hand, easier understanding by business analysts and end-users and, on the other hand, the integration of the new specific concepts relating to the SBP/BPM-KM domains into the BPMN meta-model (OMG, 2013).

Design/methodology/approach

We propose a rigorous characterization of Sensitive Business Processes (SBPs), distinguishing them from classic, structured and conventional BPs. Secondly, we propose a multidimensional classification of SBP modeling aspects and requirements in order to develop expressive, comprehensive and rigorous models. Besides, we present an in-depth study of the different modeling approaches and languages in order to analyze their expressiveness and their ability to represent the new specific requirements of SBP modeling fully and explicitly. In this study, we select BPMN 2.0, currently the best-positioned standard, as the most suitable for SBP representation. Finally, we propose a semantically rich conceptualization of an SBP organized in a core ontology.

Findings

We defined a rigorous conceptual specification for this type of BP, organized in a multi-perspective formal ontology, the Core Ontology of Sensitive Business Processes (COSBP). This reference ontology will be used to define a generic BP meta-model (BPM4KI) that further specifies SBPs. The objective is to obtain an enriched consensus model covering all the generic concepts, semantic relationships and properties needed for the exploitation of SBPs, known as core modeling.

Originality/value

This paper introduces the problem of conceptual analysis of SBPs for (crucial) knowledge identification and management. These processes are highly complex and knowledge-intensive. The originality of this contribution lies in the multi-dimensional approach we have adopted for SBP modeling as well as the definition of a Core Ontology of Sensitive Business Processes (COSBP) which is very useful to extend the BPMN notation for knowledge management.

Details

Business Process Management Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1463-7154

Keywords

Article
Publication date: 8 April 2024

Matthew Peebles, Shen Hin Lim, Mike Duke, Benjamin Mcguinness and Chi Kit Au


Abstract

Purpose

Time of flight (ToF) imaging is a promising emerging technology for crop identification. This paper aims to present a localization system for identifying and localizing asparagus in the field based on point clouds from ToF imaging. Since semantics are not included in the point cloud, it contains the geometric information of objects other than asparagus spears, such as stones and weeds. An approach is therefore required for extracting the spear information so that a robotic system can be used for harvesting.

Design/methodology/approach

A real-time convolutional neural network (CNN)-based method is used for filtering the point cloud generated by a ToF camera, allowing subsequent processing methods to operate over smaller and more information-dense data sets, resulting in reduced processing time. The segmented point cloud can then be split into clusters of points representing each individual spear. Geometric filters are developed to eliminate the non-asparagus points in each cluster so that each spear can be modelled and localized. The spear information can then be used for harvesting decisions.
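
The cluster-then-filter stage can be sketched with a simple gap-based clustering and a tall-and-thin geometric test; the thresholds and the synthetic spear/stone point clouds below are illustrative, not the authors' calibrated values.

```python
import numpy as np

def cluster_by_gap(points, axis=0, gap=0.05):
    """Split an (N, 3) point cloud into clusters wherever consecutive
    points, sorted along `axis`, are more than `gap` metres apart."""
    pts = points[np.argsort(points[:, axis])]
    breaks = np.where(np.diff(pts[:, axis]) > gap)[0] + 1
    return np.split(pts, breaks)

def looks_like_spear(cluster, min_height=0.10):
    """Geometric filter: a spear is tall and thin; a stone is not."""
    extent = cluster.max(axis=0) - cluster.min(axis=0)
    return extent[2] >= min_height and extent[2] > 2 * max(extent[0], extent[1])

rng = np.random.default_rng(2)
spear = np.column_stack([rng.normal(0.0, 0.005, 50),   # thin in x
                         rng.normal(0.0, 0.005, 50),   # thin in y
                         rng.uniform(0.0, 0.20, 50)])  # 20 cm tall
stone = np.column_stack([rng.normal(0.5, 0.02, 50),
                         rng.normal(0.0, 0.02, 50),
                         rng.uniform(0.0, 0.03, 50)])  # low and flat
clusters = cluster_by_gap(np.vstack([spear, stone]))
spears = [c for c in clusters if looks_like_spear(c)]  # stone is rejected
```

Each surviving cluster would then be modelled (e.g. by a fitted axis) to obtain the cut point for the harvester.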

Findings

The localization system is integrated into a robotic harvesting prototype system. Several field trials have been conducted with satisfactory performance. The identification of a spear from the point cloud is the key to successful localization. Segmentation and the clustering of points into individual spears are the two major failure modes targeted for future improvement.

Originality/value

Most crop localizations in agricultural robotic applications using ToF imaging technology are implemented in a very controlled environment, such as a greenhouse. The target crop and the robotic system are stationary during the localization process. The novel proposed method for asparagus localization has been tested in outdoor farms and integrated with a robotic harvesting platform. Asparagus detection and localization are achieved in real time on a continuously moving robotic platform in a cluttered and unstructured environment.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 5 March 2024

Yuchen Yang

Abstract

Purpose

Recent archiving and curatorial practices have taken advantage of advances in digital technologies, creating immersive and interactive experiences to emphasize the plurality of memory materials, encourage personalized sense-making and extract, manage and share the ever-growing surrounding knowledge. Audiovisual (AV) content, despite its growing importance and popularity, is less explored on that end than texts and images. This paper examines the trend of datafication in AV archives and answers the critical question: "What should be extracted from AV materials, and why?"

Design/methodology/approach

This study is rooted in a comprehensive state-of-the-art review of digital methods and curatorial practices in AV archives. The thinking model for mapping AV archive data to purposes builds on pre-existing models for understanding multimedia content and on metadata standards.

Findings

The thinking model connects AV content descriptors (data perspective) and purposes (curatorial perspective) and provides a theoretical map of how information extracted from AV archives should be fused and embedded for memory institutions. The model is constructed by looking into the three broad dimensions of audiovisual content – archival, affective and aesthetic, social and historical.

Originality/value

This paper contributes uniquely to the intersection of computational archives, audiovisual content and public sense-making experiences. It provides updates and insights for working towards datafied AV archives and for meeting the increasing sense-making needs that AV archives serve.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418

Keywords

Article
Publication date: 25 April 2024

Abdul-Manan Sadick, Argaw Gurmu and Chathuri Gunarathna


Abstract

Purpose

Developing a reliable cost estimate at the early stage of construction projects is challenging due to inadequate project information. Most of the information at this stage is qualitative, posing additional challenges to achieving accurate cost estimates. Additionally, there is a lack of tools that use qualitative project information to forecast the budgets required for project completion. This research, therefore, aims to develop a model for setting project budgets (excluding land) during the pre-conception stage of residential buildings, when project information is mainly qualitative.

Design/methodology/approach

Due to the qualitative nature of project information at the pre-conception stage, a natural language processing model, DistilBERT (Distilled Bidirectional Encoder Representations from Transformers), was trained to predict the cost range of residential buildings at the pre-conception stage. The training and evaluation data included 63,899 building permit activity records (2021–2022) from the Victorian State Building Authority, Australia. The input data comprised the project description of each record, which included project location and basic material types (floor, frame, roofing, and external wall).
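
The three-class evaluation reported in the findings follows the standard confusion-matrix definitions of accuracy and per-class F1. A minimal sketch with an illustrative confusion matrix (not the paper's actual results):

```python
import numpy as np

def per_class_f1(cm):
    """Per-class F1 from a confusion matrix cm[true_class, pred_class]."""
    tp = np.diag(cm).astype(float)
    precision = tp / cm.sum(axis=0)  # column sums: predicted counts
    recall = tp / cm.sum(axis=1)     # row sums: true counts
    return 2 * precision * recall / (precision + recall)

# Illustrative 3-class confusion matrix for the three cost ranges
cm = np.array([[80, 15,  5],
               [20, 60, 20],
               [ 5, 15, 80]])
f1 = per_class_f1(cm)
accuracy = np.diag(cm).sum() / cm.sum()
```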

Findings

This research designed a novel tool for predicting the project budget based on preliminary project information. The model achieved 79% accuracy in classifying residential buildings into three cost classes ($100,000–$300,000, $300,000–$500,000 and $500,000–$1,200,000), with F1-scores of 0.85, 0.73 and 0.74, respectively. Additionally, the results show that the model learnt the contextual relationship between qualitative data, like project location, and cost.

Research limitations/implications

The current model was developed using data from the state of Victoria, Australia; hence, it would not return relevant outcomes for other contexts. However, future studies can adopt the methods to develop similar models for their own contexts.

Originality/value

This research is the first to leverage a deep learning model, DistilBERT, for cost estimation at the pre-conception stage using basic project information like location and material types. Therefore, the model would contribute to overcoming data limitations for cost estimation at the pre-conception stage. Residential building stakeholders, like clients, designers, and estimators, can use the model to forecast the project budget at the pre-conception stage to facilitate decision-making.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Keywords

Open Access
Article
Publication date: 21 December 2023

Ingo Pies and Vladislav Valentinov


Abstract

Purpose

Stakeholder theory understands business in terms of relationships among stakeholders whose interests are mainly joint but may be occasionally conflicting. In the latter case, managers may need to make trade-offs between these interests. The purpose of this paper is to explore the nature of managerial decision-making about these trade-offs.

Design/methodology/approach

This paper draws on the ordonomic approach which sees business life to be rife with social dilemmas and locates the role of stakeholders in harnessing or resolving these dilemmas through engagement in rule-finding and rule-setting processes.

Findings

The ordonomic approach suggests that trade-offs between stakeholder interests ought to be neither ignored nor avoided, but rather embraced and welcomed as an opportunity for bringing to fruition the joint interest of stakeholders in playing a better game of business. Stakeholders are shown to bear responsibility for overcoming the perceived trade-offs through the institutional management of social dilemmas.

Originality/value

For many stakeholder theorists, the nature of managerial decision-making about trade-offs between conflicting stakeholder interests and the nature of trade-offs themselves have been a long-standing point of contention. The paper shows that trade-offs may be useful for the value creation process and explicitly discusses managerial strategies for dealing with them.

Details

Social Responsibility Journal, vol. 20 no. 5
Type: Research Article
ISSN: 1747-1117

Keywords

Article
Publication date: 31 May 2024

Farzaneh Zarei and Mazdak Nik-Bakht

Abstract

Purpose

This paper aims to enrich 3D urban models with data contributed by citizens to support data-driven decision-making in urban infrastructure projects. We introduced a new application domain extension to CityGML (the social-input ADE) to enable citizens' comments on infrastructure elements to be stored, classified and exchanged. The main goal of the social-input ADE is to add citizens' feedback as semantic objects to the CityGML model.

Design/methodology/approach

Firstly, we identified the key functionalities of the suggested ADE and how to integrate them with existing 3D urban models. Next, we developed a high-level conceptual design outlining the main components and interactions within the social-input ADE. Then, we proposed a package diagram for the social-input ADE to illustrate the organization of model elements and their dependencies. We also provide a detailed discussion of the functionality of the different modules in the social-input ADE.
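
Storing a citizen comment as a semantic object could look like the following sketch, which serializes a hypothetical ADE element with Python's xml.etree; the namespace and element names are invented for illustration and do not reflect the authors' actual social-input ADE schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical namespace and element names, for illustration only
ADE_NS = "https://example.org/social-input-ade/1.0"
ET.register_namespace("social", ADE_NS)

comment = ET.Element(f"{{{ADE_NS}}}CitizenComment")
ET.SubElement(comment, f"{{{ADE_NS}}}targetObject").text = "bridge_042"
ET.SubElement(comment, f"{{{ADE_NS}}}category").text = "maintenance"
ET.SubElement(comment, f"{{{ADE_NS}}}text").text = "Handrail is corroded."

xml_str = ET.tostring(comment, encoding="unicode")
```

In a real deployment, such elements would be validated against the ADE's XSD and attached to the corresponding CityGML city object.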

Findings

This research shows that informative streams of information can be generated by mining the stored data. The proposed ADE links information about the built environment to the knowledge of end-users and enables an endless number of socially driven innovative solutions.

Originality/value

This work aims to provide a digital platform for aggregating, organizing and filtering the distributed end-users’ inputs and integrating them within the city’s digital twins to enhance city models. To create a data standard for integrating attributes of city physical elements and end-users’ social information and inputs in the same digital ecosystem, the open data model CityGML has been used.

Details

Built Environment Project and Asset Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2044-124X

Keywords
